
Firefox 44 Released With Bug Fixes for Better Media Playback


Firefox 44, the latest version of Mozilla’s hugely popular open-source web browser, is now available for download. 

Firefox 44 brings a minor set of changes to the web browsing table, with development having focused on security improvements, bug fixes and media playback tweaks rather than ‘new features’.

Firefox 44 Changes & New Features

firefox insecure warning page

The biggest user-facing change in Firefox 44 is one you don’t want to see: a redesigned warning page. 

Firefox shows the clearer, more concise “Your connection is not secure” page for websites with certificate errors or an insecure connection. Mozilla explains further:

“When Firefox connects to a secure website it must verify that the certificate presented by the website is valid and that the encryption is strong enough to adequately protect your privacy. If the certificate cannot be validated or if the encryption is not strong enough, Firefox will stop the connection to the website and instead show an error page.”

Click ‘Advanced’ to see further information about the issue (previously hidden behind ‘Technical Details’) and to load the site in spite of the warning(s) thrown (Advanced > ‘Add Exception’).


Unsigned add-ons, like this one, can still be run

Other Changes:

Enforced Add-Ons Policy Deferred

Add-on signing was due to be enforced in Firefox 44, by removing a hidden toggle to override it.

A last-minute change has deferred the removal of the add-on signing override to Firefox 46, as Mozilla explains:

“In Firefox 43, we made it a default requirement for add-ons to be signed. This requirement can be disabled by toggling a preference that was originally scheduled to be removed in Firefox 44 for release and beta versions (this preference will continue to be available in the Nightly, Developer, and ESR Editions of Firefox for the foreseeable future).”

“We are delaying the removal of this preference [in beta & stable builds] to Firefox 46.”

The toggle to load unsigned add-ons is still present, news that will please extension developers and all four users of our (severely antiquated) OMG! Ubuntu! Firefox add-on 😉.

(As soon as Firefox support for Chrome extensions arrives, we’ll port over our nifty Chrome add-on.)

Download Firefox 44 for Linux

If you’re running a supported version of Ubuntu¹ you will receive this update automatically via Ubuntu Software Updater.

Since Ubuntu updates aren’t pushed out in real time, you may need to run a manual check for the update(s).

  1. Open the Dash
  2. Search for ‘Software Updater’
  3. Click the ‘Check for New Updates’ button

In addition, Firefox is available to download on all platforms directly from the Mozilla website.

Download Mozilla Firefox for Linux

¹Ubuntu 12.04 LTS, 14.04 LTS, Ubuntu 15.10

This post, Firefox 44 Released With Bug Fixes for Better Media Playback, was written by Joey-Elijah Sneddon and first appeared on OMG! Ubuntu!.


Google Chrome Axes Support for ALL 32-bit Linux Distros

chrome drops linux 32 support

Support for Ubuntu 12.04 LTS is Also Being Retired

Google Chrome is to drop support for all 32-bit Linux distros from March, 2016. 

The change, which brings the platform in line with that of Mac OS X, will apply to all x86 Linux builds, regardless of distribution or version number.

Users affected will still be able to use Chrome after the axe has fallen, but they will no longer receive any updates.

In a double-whammy, March will also see Google Chrome stop supporting Ubuntu 12.04 LTS (which will receive critical and security bug fixes from Canonical until mid 2017).

‘Ubuntu users are advised to upgrade to a 64-bit version of Ubuntu 14.04 LTS or later’

From this March, only 64-bit versions of Ubuntu 14.04 LTS (or later) will receive new versions of the browser from Google.

To run a supported version of Google Chrome, Precise users are advised to upgrade to a 64-bit version of Ubuntu 14.04 LTS (or later).

Why Is Google Dropping Support?

The small Google Chrome Linux team can’t support all versions of Ubuntu and other Linux distributions indefinitely. With Linux already a small percentage of Chrome’s overall user base, and 32-bit users an even smaller share of that, something had to give at some point.

The build infrastructure used to package Google Chrome is tasked with making hundreds of binaries each day, and human effort is required to test those binaries for release.

“To provide the best experience for the most-used Linux versions, we will end support for Google Chrome on 32-bit Linux, Ubuntu Precise (12.04), and Debian 7 (wheezy) in early March, 2016,” says Chromium engineer Dirk Pranke.

32-bit Chromium Is Not Affected

‘Chromium is unaffected by the change.’

Many Linux users run Chromium, the open-source basis of Chrome, and so won’t be affected by this change. Google Chrome and Chrome OS builds for 32-bit ARM are similarly unaffected.

For browsers built on Chromium, like Opera, it will be up to them as to whether they continue to offer builds for 32-bit users.

Google says it will ‘keep support for 32-bit build configurations on Linux to support building Chromium’, which we’re told it will do for ‘some time to come’.

Do you use Google Chrome on a 32-bit version of Linux? Will you switch to another browser? Perhaps you think this decision is logical. Whatever your view on this decision you can share it in the comments below. 

This post, Google Chrome Axes Support for ALL 32-bit Linux Distros, was written by Joey-Elijah Sneddon and first appeared on OMG! Ubuntu!.


darktable 2.0 released with printing support

Darktable, free RAW processing software for Linux and Mac, got a major update just in time for your festive season.

The most visible new feature is the print module, which uses CUPS. Printing is completely color-managed, and you can tweak the position of images on paper, etc. All the basics are in place.

print module in darktable

A nice “perk” of this new feature is the ability to export to PDF in the export module.

The other important change is improved color management support. The darkroom mode now features handy toggles for soft-proofing and gamut check below the viewport (darktable fills out-of-gamut areas with cyan). Additionally, thumbnails are now properly color-managed.

Something I personally consider a major improvement in terms of getting darktable to work nicely out of the box is that the viewport is finally sized automatically. You no longer need to go through the trial-and-error routine of setting it up in the preferences dialog. It just works. Moreover, the mipmap cache has been replaced with a thumbnail cache, which makes a huge difference. Everything is really a lot faster.

film grain added in darktable

If you worry about losing your data (of course you do), darktable 2.0 finally supports deleting images to the system trash (where available).

The port to the Gtk+3 widget set is yet another major change that you might or might not care much about. It’s mostly there to bring darktable up to date with recent changes in Gtk+ and to simplify support for HiDPI displays (think Retina, 4K, 5K, etc.).

The new version features just two additional image processing modules:

  • Color reconstruction attempts to restore useful data from overexposed areas in your photos.
  • Raw black/white point module is pretty much an internal feature that the team hopes you never ever touch (of course you will). It was a prerequisite step towards dual-ISO support and better denoising.

Other existing modules got all sorts of tweaks and updates. Most notably, deflicker from Magic Lantern was added to the exposure module.

Additionally, the watermark module now features a simple-text.svg template, so that you can apply a configurable line of text to your photos. That means that with a frame plugin and two instances of watermark you can use darktable for the most despicable reason ever:

making a meme in darktable

The most important change in Lua scripting is that scripts can now add buttons, sliders, and other user interface widgets to the lighttable view. To go with that, the team started a new repository for scripts on GitHub.

Finally, the usual part of every release: updates in the camera support:

  • Base curves for 8 more cameras by Canon, Olympus, Panasonic, and Sony.
  • White balance presets for 30 new cameras by Canon, Panasonic, Pentax, and Sony.
  • Noise profiles for 16 more cameras by Canon, Fujifilm, Nikon, Olympus, Panasonic, Pentax, and Sony.

For a more complete list of changes, please refer to the release announcement. Robert Hutton also shot a nice video covering the most important changes in this version of darktable:

LGW spoke to Johannes Hanika, Tobias Ellinghaus, Roman Lebedev, and Jeremy Rosen.

Changes in v2.0 could be summarized as one major new feature (printing) and lots of both under-the-hood and user interaction changes (Gtk+3 port, keyboard shortcuts etc.). All in all, it’s more of a gradual improvement of the existing features. Is this mostly because of the time and efforts that the Gtk+3 port took? Or would you say that you are now at the stage where the feature set is pretty much settled?

Tobias: That’s a tough question. The main reason was surely that the Gtk+3 port took some time. Secondly, the main motivation for most of us is scratching our itches, and I guess that most of the major ones are scratched by now. That doesn’t mean that we have no more ideas what we’d like to see changed or added, but at least most low-hanging fruits are picked, so everything new takes more time and effort than big changes done in the past.

Roman: The Gtk+3 port, as it seems, was the thing that got me initially involved with the project. On its own, just the port (i.e. rewriting all the necessary things and making it compile and be mostly functional) did not take too long, no more than a week, and was finished even before the previous release (v1.6) happened. But it was the stabilization work, i.e. fixing all those small things that are hard to notice but are irritating and make for a bad user experience, that took a while.

Johannes: As far as I’m concerned, yes, darktable is feature complete. The under-the-hood changes are also pretty far-reaching and another reason why we call it 2.0.0. The Gtk+3/GUI part is of course the most visible and the one you can most easily summarize.

Jeremy: I’d like to emphasize the “under the hood” part. We did rewrite all our cache management, and that’s a pretty complicated part of our infrastructure. I don’t think this cycle was slow, it’s just that most of it is infrastructure work needed if we want darktable’s visible feature set to grow in the future…

color balance adjusted in darktable

Darktable seems to be following the general industry trend where software for processing RAW images becomes self-sustained, with non-destructive local editing features such as a clone tool, as well as sophisticated selection and masking features. In the past, I’ve seen you talking about not trying to make a general-purpose image editor out of darktable, but these features just seem to crawl in no matter what, you are even considering adding a Liquify-like tool made by a contributor. Would you say that your project vision has substantially changed in the past? How would you define it now?

Tobias: I don’t see too many general image manipulation features creeping in. We have had masks for a while, and the liquify/warping thing would be another one, but besides that I don’t see anything. There is also the question of where to draw the line. Is everything besides global filters (exposure, levels, …) already a step towards a general purpose editor? Are masks the line being crossed? I don’t know for sure, but for me it’s mostly pushing individual pixels, working with layers, merging several images. We do none of those and I hope we never will.

Johannes: I think this is caused by how darktable is governed. It’s very much driven by the needs of individual developers, and we’re very open when it comes to accepting the work of motivated contributors. We have a large dev base, so I guess it was just a matter of time until someone felt the need for this or that and just went ahead and implemented it. I guess you could say we weren’t consistent enough in rejecting patches, but so far I don’t think this strategy has hurt us much. On the contrary, it helps to foster a large community of motivated developers.

HDR merging does exist though, and there’s even a feature request to add manual/automatic alignment. And both duplication and configurable blending of processing modules are a lot like working with layers, even though the processing pipeline is fixed.

Tobias: Yes, but that doesn’t counter my point: editing single pixels is out of scope; general calculations like that fit.

Johannes: To give a very specific answer to this very specific question: the HDR merging works on pre-demosaic raw data (which is why we have it, it’s substantially simpler than/different to other tools except Wenzel’s hdrmerge which came after IIRC). So automatic alignment is not possible (or even manual for that matter).

exposure adjusted in darktable

Have you already defined any major milestones for future development?

Tobias: No. Version 2.0 had the predefined milestone “Gtk+3 port”, but that was an exception. Normally we start working on things we like, new features pile up and at some point we say “hey, that looks cool already, and we didn’t have a release for a while, let’s stabilize and get this to the users”. There is a lot less planning involved than many might think.

Roman: As Tobias said, there are rarely pre-defined milestones. It is more like, someone has some cool idea, or needs some functionality that is not there yet, and he has time to implement it.

Personally, I have been working on an image operation for highlight reconstruction via inpainting. There are several such operations in darktable already, but frankly, that is currently one of the important features that darktable still doesn’t handle completely.

There has been a lot of preparatory under-the-hood work over the last two releases, which has now opened up the possibility of some interesting things, say, native support for Magic Lantern’s Dual ISO, or a new version of our profiled denoise image operation.

I’m also looking into adding yet another process() function to image operations, one that would not use any intrinsic instructions but only OpenMP SIMD, thus freeing darktable from any hard dependency on x86 processors, i.e. it could work on ARM64 too.
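For readers unfamiliar with the distinction, here is a minimal illustrative sketch in C of the idea Roman describes. It is not darktable code; the function and variable names are made up. The first variant uses x86-only SSE intrinsics, the second asks the compiler to vectorize a portable loop via OpenMP SIMD, which also works on ARM64/NEON:

    #include <stddef.h>
    #include <xmmintrin.h>   /* SSE intrinsics: x86-only */

    /* x86-only variant: explicit SSE intrinsics, 4 floats at a time. */
    void apply_gain_sse (float *out, const float *in, size_t n, float gain)
    {
      const __m128 g = _mm_set1_ps (gain);
      size_t i = 0;
      for (; i + 4 <= n; i += 4)
        _mm_storeu_ps (out + i, _mm_mul_ps (_mm_loadu_ps (in + i), g));
      for (; i < n; i++)              /* scalar tail */
        out[i] = in[i] * gain;
    }

    /* Portable variant: the OpenMP SIMD pragma lets the compiler vectorize
     * the loop itself (SSE/AVX on x86, NEON on ARM64), no intrinsics needed.
     * Build with e.g. -fopenmp-simd. */
    void apply_gain_simd (float *out, const float *in, size_t n, float gain)
    {
      #pragma omp simd
      for (size_t i = 0; i < n; i++)
        out[i] = in[i] * gain;
    }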

Jeremy: I would like to add the manipulation of actual image parameters to Lua, that is a big chunk of work. Apart from that it will mainly depend on what people do/want to do.

What kind of impact do you think the addition of Lua scripting has had on users’ workflows so far? What are the most interesting things you’ve seen people do with Lua scripting in darktable?

Tobias: Good question. We have been slowly adding Lua support since 1.4, but only now are we getting to a point where more advanced features can be done. In the future I can see quite a few fancy scripts being written that people can just use, instead of everyone coding the same helpers over and over again. That’s also the motivation for our Lua scripts repository on GitHub. While there are some official scripts, i.e. mostly written and maintained by Jeremy and me, we want them to be seen as an extension to the Lua documentation, so that others can get ideas of how to use our Lua API.

The results of that can be seen in the ‘contrib’ directory. The examples there range from background music for darktable’s slideshows to a hook that uses ‘mencoder’ to assemble timelapses. We hope to see many more contributions in the future.

Jeremy: Lua was added mainly for users that have a specific workflow that goes against the most common one. Darktable will follow the most common workflow, but Lua will allow other users to adapt DT to their specific needs.

That being said, I agree with Tobias that Lua in 1.6 was still missing some bricks to make it really useful. Without the possibility to add widgets (buttons, sliders, etc.) to darktable, it was impossible to make a script that was really usable without technical knowledge.

With the Lua repository and the possibility to add widgets, things should go crazy really fast. Did you know that you can remote-control darktable via D-Bus by sending Lua commands?
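For the curious, here is a rough C sketch of what that remote control could look like using GIO’s GDBus API. The GDBus calls are standard GLib; the darktable-specific bus name, object path, interface, and method names below are assumptions based on how the feature has been described, so check darktable’s documentation for the exact values:

    /* Sketch: send a Lua snippet to a running darktable instance over D-Bus.
     * Build with `pkg-config --cflags --libs gio-2.0` (assumed setup).
     * The darktable-specific names below are assumptions, not verified API. */
    #include <gio/gio.h>

    int main (void)
    {
      GError *error = NULL;
      GDBusConnection *bus = g_bus_get_sync (G_BUS_TYPE_SESSION, NULL, &error);
      if (!bus) { g_printerr ("%s\n", error->message); return 1; }

      GVariant *ret = g_dbus_connection_call_sync (
          bus,
          "org.darktable.service",          /* bus name: assumption      */
          "/darktable",                     /* object path: assumption   */
          "org.darktable.service.Remote",   /* interface: assumption     */
          "Lua",                            /* method: assumption        */
          g_variant_new ("(s)", "print('hello from D-Bus')"),
          NULL, G_DBUS_CALL_FLAGS_NONE, -1, NULL, &error);

      if (ret)
        g_variant_unref (ret);
      else
        g_printerr ("%s\n", error->message);

      g_object_unref (bus);
      return 0;
    }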

white balance adjusted in darktable

In early days of darktable quite a few features (e.g. wavelet-based) came directly from papers published at SIGGRAPH etc. What’s your relationship with the academic world these days?

Tobias: We didn’t add many new image operations recently, and those that got added were mostly not so sophisticated that we had to take the ideas from papers. That doesn’t mean that our link to the academic world has been dropped; Johannes is still working as a researcher at a university, and when new papers come out we might think about implementing something new, too.

Johannes: Yes, as Tobias says. But then again, graphics research is my profession, and darktable is for fun. No, seriously, the last few SIGGRAPHs didn’t have any papers that seemed a good fit for implementation in darktable to me.

Several years ago you switched to rawspeed library by Klaus Post from the Rawstudio project. Now it looks like darktable is the primary “user” of rawspeed, and your own Pedro Côrte-Real is 2nd most active contributor to the library. Doesn’t it feel at least a tiny bit weird? 😉

Tobias: I think it’s a great example of how open source software can benefit from each other. I’m not sure if that’s weird or just a bit funny.

How has your relationship with the Magic Lantern project been evolving, given the deflicker feature etc.?

Tobias: The deflicker code wasn’t so much contributed by the Magic Lantern folks as written by Roman with inspiration from how Magic Lantern does it. I don’t know if he used any code from them, maybe he can clarify. Apart from deflicker there are also plans to support their Dual ISO feature natively.

Roman: The only direct contribution from the Magic Lantern project was the highlight reconstruction algorithm that made it into v1.6. The deflicker was implemented by me, as it usually happens, after I needed a way to auto-expose lots of images and found no way to do it. That being said, it uses exactly the same math as deflick.mo does.

Tobias: Even that did not involve taking code from them. Jo wrote it after talking with Alex at LGM.

Johannes: But it was most inspiring to meet those folks in person. And yes, I’ve been a lazy ass about implementing this Dual ISO support natively in darktable ever since LGM.

Darktable seems to be doing pretty well without any kind of community funding, which is all the rage these days. What do you think are the causes of that?

Tobias: Well, we’d need some legal entity that takes care of taxes. And to be honest, we don’t need that much money. Our server is sponsored by a nice guy and there are no other expenses. Instead we have been asking our users to donate to LGM for several years now and from what we can see that helped a lot.

As for why we have been doing so well, no idea. Maybe because we are doing what we want without caring if anyone would like it. To the best of our knowledge darktable has exactly 17 users (that number is measured with the scientific method of pulling it out of thin air), so whatever we do, we can lose at most those few. Nothing to worry about.


The new version of darktable is available as source code and a .dmg for Mac OS X. Builds for various Linux distributions have either already landed or are pending.


GIMP 2.9.2 Released, How About Features Trivia?

In a surge of long overdue updates, the GIMP team made the first public release in the 2.9.x series. It’s completely GEGL-based, has 16/32-bit per channel editing, and adds new tools. It’s also surprisingly stable, even for the faint of heart.

Obligatory disclaimer: I’m currently affiliated with the upstream GIMP project. Please keep that in mind if you think you’ve stumbled upon a biased opinion and feel like calling LGW out.

One might expect a detailed review here, which totally makes sense; however, writing two similar texts for both the upstream GIMP project and LGW would seem unwise. So here’s the deal: the news post at GIMP.org briefly covers most angles of this release, while this article focuses on features trivia and possible areas of contribution.

The GEGL port and HDR

Originally launched in 2000 by a couple of developers from Rhythm & Hues visual effects studio, the GEGL project didn’t have it easy. It took 7 years to get it to GIMP at all, then another 8 years to power all of GIMP.

So naturally, after years and years (and years) of waiting the very first thing people would be checking in GIMP 2.9.2 is this:

First and foremost, 64-bit is there mostly for show right now, although GIMP will open and export 64-bit FITS files, should you find any.

That said, you can use GIMP 2.9.2 to open a 32-bit float OpenEXR file, adjust color curves, apply filters, then overwrite that OpenEXR file or export it under a different name. Job done.

The same applies to PNG, TIFF, and PSD files: respective plugins have been updated to support 16/32-bit per channel data to make high bit depth support actually useful even for beta testers.

All retouching and color adjustment tools, as well as most, if not all plugins are functional in 16/32-bit modes. There’s also basic loading and exporting of OpenEXR files available (no layers, no fancy features from v2.0).

GIMP also provides several tonemapping operators via the GEGL tool, should you want to go back to low dynamic range imaging.

Mantiuk06 tonemapping operation

There are, however, at least two major features in GEGL that are not yet exposed in GIMP:

  • RGBE (.hdr) loading and exporting;
  • basic HDR merging from exposure stacks.

This is one of the areas where an interested developer could make a useful contribution at a fairly low cost in time.

In particular, adding a GEGL-based HDR merge tool to GIMP should be easier now thanks to a widget for using multiple inputs to one GEGL operation (which would be exp-combine).

GEGL operations

Currently 57 GIMP plugins are listed as completely ported to become GEGL operations, and 27 more ports are listed as work in progress. That leaves 37 more plugins to port, so the majority of the work appears to be done.

Additionally, GEGL features over 50 original filters, although some of them are currently blacklisted, because they need to be completed. Also, some of the new operations were written to implement certain features in GIMP tools. E.g. the Distance Map operation is used by the Blend tool for the Shape Burst mode, and both matting operations (Global and Levin) are used by the Foreground Select tool to provide mask generation with subpixel precision (think hair and other thin objects).

Various new operations exposed in GIMP, like Exposure (located in the Colors menu) and High Pass (available via the GEGL tool), are quite handy in photography workflows.
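Because these filters are plain GEGL operations, they can also be driven outside GIMP through GEGL’s graph API. Here is a minimal C sketch; the gegl:exposure property name and the build flags are assumptions based on GEGL’s documented operations, so consult the docs for your GEGL version:

    /* Minimal sketch: apply GEGL's exposure operation to a file outside GIMP.
     * Build with e.g. `pkg-config --cflags --libs gegl-0.3` (assumed setup). */
    #include <gegl.h>

    int main (int argc, char **argv)
    {
      gegl_init (&argc, &argv);

      GeglNode *graph  = gegl_node_new ();
      GeglNode *load   = gegl_node_new_child (graph,
                                              "operation", "gegl:load",
                                              "path", "in.exr", NULL);
      GeglNode *expose = gegl_node_new_child (graph,
                                              "operation", "gegl:exposure",
                                              /* property name is an assumption: +1.5 EV */
                                              "exposure", 1.5, NULL);
      GeglNode *save   = gegl_node_new_child (graph,
                                              "operation", "gegl:save",
                                              "path", "out.exr", NULL);

      gegl_node_link_many (load, expose, save, NULL);
      gegl_node_process (save);   /* pulls data through the graph */

      g_object_unref (graph);
      gegl_exit ();
      return 0;
    }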

Note that if you are used to “Mono” switch in the Channel Mixer dialog, this desaturation method is now available through a dedicated Mono Mixer operation (Colors->Desaturate submenu). It might take some getting used to.

Mono Mixer

It’s also worth mentioning that 41 operations, counting both ports and original GEGL operations, have OpenCL versions, so they can run on a GPU.

And while the immensely popular external G’MIC plugin is not going to become a GEGL operation any time soon (most likely, ever), it has recently become ready to be used in conjunction with GIMP 2.9.x in any precision mode.

There are some technical aspects about GIMP filters and GEGL operations in GIMP 2.9.x that you might want to know as well.

First of all, some plugins have only been ported to use GEGL buffers, while others have become full-blown GEGL operations. In terms of programming time, the former is far cheaper than the latter, so why go the extra mile when GIMP 2.10 is long overdue and time could be spent more wisely?

Softglow

Porting plugins to use GEGL buffers simply means that a filter can operate on whatever image data you throw at it, be it 8-bit integer or 32-bit per channel floating point. Which is great, because e.g. Photoshop CS2 users who tried 32-bit mode quickly learnt there was quite a lot they couldn’t do, until at least CS4, released several years later.

The downside of this comparatively cheap approach is that in the future non-destructive GIMP these filters would be sad destructive remnants of the past. They would take bitmap data from a buffer node in the composition tree and overwrite it directly, so you would not be able to tweak their settings at a later time.

So the long-term goal is still to move as much as possible to GEGL. And that comes at a price.

First of all, you would have to rewrite the code in a slightly different manner. Then you would have to take an extra step and write a special UI in GIMP for the newly created GEGL op. The reason?

While the GEGL tool skeleton is nice for operations with maybe half a dozen settings (see the Softglow filter screenshot above), using something like an automatically generated UI for e.g. Fractal Explorer would soon make you lose your cool:

Old vs. new Fractal Explorer

The good news is that writing custom UIs is not particularly difficult, and there are examples to learn from, such as the Diffraction Patterns op:

Diffraction Patterns operation

As you can see, it looks like the former plugin with tabbed UI and it has all the benefits of being a GEGL operation, such as on-canvas preview, named presets, and, of course, being future-proof for non-destructive workflows.

FFmpeg support in GEGL

If you have already read the changelog for the two latest releases of GEGL, chances are that you are slightly puzzled about FFmpeg support. What would GEGL need it for? Well, there’s some history involved.

Øyvind Kolås started working on GEGL ca. 10 years ago by creating its smaller fork called gggl and using it for a video compositor/editor called Bauxite. That’s why GEGL has FFmpeg support in the first place.

Recently Øyvind was sponsored by The Grid to revive ff:load and ff:save operations. These ops drive the development of the iconographer project and add video capabilities to The Grid’s artificial intelligence based automatic website generator.

The FFmpeg-based loading and saving of frames could also come in handy for the GIMP Animation Package project, should it receive much needed revamp. At the very least, they would simplify loading frames from video files into GIMP.

New Tools

The new version has 6 new tools—2 stable, 4 experimental. Here’s some trivia you might want to know.

GIMP is typically referred to as a tool that falls behind Photoshop. Opinions of critics differ: some say it’s like Photoshop v5, others graciously upgrade it all the way to a CS2 equivalent.

If you’ve been following the project for a while, you probably know that, anecdotally, the Liquid Rescale plugin was made available a year ahead of Photoshop CS5 Extended. And you probably know that Resynthesizer made inpainting available in GIMP a decade before Content-Aware Fill made its way to Photoshop CS6.

But there’s more. One of the most interesting new features in GIMP 2.9.2 is the Warp Transform tool, written by Michael Muré during the Google Summer of Code 2011 program.

It’s the interactive on-canvas version of the venerable iWarp plugin that looked very much like a poor copy of Photoshop’s Liquify filter. Except it was introduced to GIMP in 1997, while Liquify first appeared in Photoshop 6, released in 2000.

Warp Transform reproduces all the features of the original plugin, including animation via layers, and adds the sorely missing Erase mode, designed to selectively retract some of the deformations you have added. The mode isn’t functioning correctly yet, so you won’t restore original data to its pixel-crisp state, but there are a few more 2.9.x releases ahead to take care of that.

Unified Transform tool is a great example of how much an interested developer can do, if he/she is persistent.

Originally, merging the Rotate, Scale, Shear, and Perspective tools into a single one was roughly scheduled for version 3.6. This would have proved challenging, what with the Sun having exploded by then and the Earth being a scorched piece of rock rushing through space, with a bunch of partying water bears on its back.

But Mikael Magnusson decided he’d give it a shot out of curiosity. When the team discovered that he had already done a good chunk of the work, he was invited to participate at Google Summer of Code 2012 program, where he completed this work.

Unfortunately, it’s also an example of how much the GEGL port delayed getting cool new features into the hands of benevolent, if slightly irritated masses.

Internal Search System

Over the years GIMP has amassed so many features that locating them can be a bit overwhelming for new users. One way to deal with this is to review the menu structure, plugin names and their tooltips in the menu etc., maybe cut most bizarre ones and move them into some sort of an ‘extras’ project.

Srihari Sriraman came up with a different solution: he implemented an internal search system. The system, accessible via Help->Search and Run a Command, reads names of menu items and their descriptions and tries to find a match for a keyword that you specified in the search window.

Searching action in GIMP

As you can see, it does find irrelevant results, because some tooltips provide an overly technical explanation (unsharp mask uses blurring internally to sharpen, and the tooltip says so, hence the match). This could eventually lead to some search optimization of tooltips.

Color Management

The news post at gimp.org casually mentions a completely rewritten color management plugin in GIMP. What that actually means is that Michael Natterer postponed the 2.9.2 release in April (originally planned to coincide with Libre Graphics Meeting 2015) and focused on rewriting the code for the next half a year.

The old color management plugin has been completely removed. Instead, libgimpcolor, one of GIMP’s internal libraries, got a new API for accessing ICC profile data, color space conversions, etc.

Since GIMP reads and writes OpenEXR files now, it seems obvious that GIMP should support ACES via OpenColorIO, much like Blender and Krita. This has been only briefly discussed by the team so far, and the agreement is that a patch would be accepted for review. So someone needs to sit down and write the code.

What about CMYK?

Speaking of color, nearly every time there’s a new GIMP release, even if it’s just a minor bugfix update, someone asks whether CMYK support has been added. This topic is now covered in the new FAQ at gimp.org, but there’s one more tiny clarification to make.

Since autumn 2014, GEGL has had an experimental (and thus not built by default) operation called Ink Simulator. It’s what one might call a prerequisite for implementing full CMYK support (actually, separation into an arbitrary number of plates) in GIMP. While the team gives this task a low priority (see the FAQ for the explanation), this operation is a good starting point for someone interested in working on CMYK in GIMP.

Digital Painting

Changes to the native brush engine in GIMP are minor in the 2.9.x series due to Alexia’s maternity leave. Even so, painting tools got Hardness and Force sliders, as well as the optional locking of brush size to zoom.

Somewhat unexpectedly, most other changes in the painting department stem indirectly from the GIMP Painter fork by sigtech. The team evaluated various improvements in the fork and reimplemented them in the upstream GIMP project.

Canvas rotation and flipping

Canvas rotation and horizontal flipping. Featuring artwork by Evelyne Schulz.

Interestingly, while most of these new features might look major to painters, they actually turned out to be low-hanging fruit in terms of programming effort. Most bits had already been in place; hence GIMP 2.9.2 features canvas rotation and flipping, as well as an automatically generated palette of recently used colors.

Another new feature is experimental support for the MyPaint Brush engine. This is another idea from the GIMP Painter fork. The implementation is cleaner in programming terms, but it is quite incomplete and needs serious work before the new brush tool can be enabled by default.

MyPaint Brush tool

Some Takeaways For Casual Observers and Potential Contributors

As seen in recently released GIMP 2.9.2, the upcoming v2.10 is going to be a massive improvement with highlights such as:

  • high bit depth support (16/32-bit per channel);
  • on-canvas preview for filters;
  • OpenEXR support;
  • better transformation tools;
  • new digital painting features;
  • fully functional color management;
  • improved file formats support.

Much of what could be said about the development pace in the GIMP project has already been extensively covered in a recent editorial.

To reiterate, a lot of anticipated new features are blocked by the lack of GIMP 2.10 (complete GEGL port) and GIMP 3.0 (GTK+3 port) releases. There are not enough human resources to speed it up, and available developers are not crowdfundable due to existing work and family commitments.

However, for interested contributors there are ways to improve both GIMP and GEGL without getting frustrated by the lack of releases featuring their work. Some of them have been outlined above, here are a few more:

  • Create new apps that use GEGL (example: GNOME Photos).
  • Port more GIMP filters to GEGL or create entirely new GEGL operations (both would be almost immediately available to users).
  • Create OpenCL versions of GEGL operations.

All of these contributions will directly or indirectly improve GIMP.

With that—thanks for reading!


Morevna animation project launches new crowdfunding campaign

Three years after releasing a community-funded teaser, Morevna project returns to crowdfunding with a revamped story line and entirely new visuals.

Morevna project is a Russia-based open animation project that has been driving the development of 2D vector animation package Synfig for the past several years.

The story is loosely based on a Russian fairy-tale that features a kick-ass female protagonist, an evil wizard, crazy horse chases, getting physical over a woman, dismembering and resurrecting the male protagonist, an epic final battle—and it’s all inevitably twisted around a damsel in distress situation. Your average bedtime story for the kiddies, really.

The updated plot is taking place in the future, where robot overlords are just as bad as the wizards of old (with the exception of womanizing, for obvious reasons), and distressed damsels handle samurai swords like nobody’s business. Ouch.

Both Morevna and Synfig have the same project leader, Konstantin Dmitriev. Both projects have benefitted from crowdfunding in the past, especially Synfig. But with a new concept artist and, in fact, a new team, it was time for Morevna to get to the next stage.

Last week, Konstantin launched a new campaign to fund the dubbing of the first episode in the first-ever Morevna series. The work would be done by Reanimedia Ltd., a Moscow-based dubbing studio that specializes in anime movies and has a bit of a cult following due to the high quality of the localization it provides.

And here’s an unexpected turn of events: the dubbing will be in Russian only. Moreover, the campaign was launched on Planeta.ru, which makes it somewhat difficult for non-Russian users to contribute. So LGW had no choice but to interview Konstantin.

(Disclaimer: the interview was originally published a week ago in Russian. This is its shorter version.)

The promotional video left some questions unanswered. Like a very basic one: how many episodes are planned?

So far we are planning 8 episodes.

You are deliberately focusing on the Russian audience instead of a wider international one. Why?

It’s our primary goal to create an anime movie in Russian. It only stands to reason that the campaign would be interesting mostly to the Russian community.

Will there be another campaign to make the series available in English?

No, we are taking an entirely different approach here. We’d have to search for the right team and the right studio, so instead we’ll release a fan dubber kit—basically, the original video track and stem-exported audio recordings of the music, sound effects, and voiceovers, as well as the dialog text in English.

Anyone would then be able to create his/her own dubbing and release a localized video. It’s all going to be released under the terms of a Creative Commons license, after all.

Does it bother you at all that the quality of some fan dubs could be subpar? Or is it just the reality that you choose to accept?

It’s really not our responsibility. We’ll just publish the fan dubber kit and see how it goes. We are really curious about how this will turn out.

We could launch some sort of a competition, but it’s something I really hate to do. It’s hard enough to tell someone his/her work wasn’t good enough even when you see the person did his/her best. So we take the Creative Commons remix way.

Planeta.ru, which you chose as the crowdfunding platform, isn’t even available in English. Is there a way for people to support your project somehow?

Sure, we are on Patreon.

The visuals have considerably changed in comparison to the demo from three years ago. What made the major impact?

When we finished the demo, we realized that our resources were depleted. We weren’t happy with the outcome. We spent too much time doing technical things like vectorization and too little time being creative.

The way things were going, we couldn’t possibly succeed in completing the whole movie. So we needed a new approach: a way to keep the visuals enjoyable while relying on technology that we could realistically handle.

Another major factor is the arrival of Anastasia Majzhegisheva, our new art director. She’s only 16 years old, but she’s very talented and she gets Japanese animation.

Have there been any other changes in the team?

Nikolai Mamashev, who was one of the major contributors to the demo, is still part of the team, but now he mostly does concept art and is extremely busy with commercial projects.

At certain production stages, like colouring, we started getting kids from school involved, to mutual benefit.

How much has your workflow and toolchain changed?

A lot. It’s now more of a cutout animation. We still use elements of frame-based animation, but we don’t do any morphing whatsoever.

It’s a deliberate change we made after releasing the demo three years ago, and we significantly improved Synfig in that respect. The software now has skeletal animation which also greatly simplifies our workflow.

Basic sound support in Synfig is beneficial too, although, frankly, it could have been better.

As for digital painting, it’s Krita all the way now. We barely use anything else.

More than that, we rewrote Remake, our smart rendering manager, from scratch. The new project is called RenderChan. It’s far more capable and supports the free/libre Afanasy render farm.

We still use Blender VSE for video editing, but that’s pretty much it. We have just a few 3D elements in shots.

The production pipeline is still a work in progress, though. We hope to be able to switch to Cobra soon—it’s the new rendering engine in Synfig. That means we really, really need to make Cobra usable ASAP.

Have you already succumbed to the international Natron craze? 🙂

Not really, no. As a matter of fact, I haven’t even had a chance to try it. We do all compositing inside Synfig. For now, it’s more than enough.


Afanasy Render Farm Manager Gets Natron Support

Timur Hairulin released an update of his free/libre CGRU render farm management tools.

The newly arrived version of CGRU features support for Natron, the free/libre VFX compositing and animation software, and for Fusion, one of its proprietary counterparts, by Blackmagic Design.

Timur has great hopes for Natron:

I still haven’t used it in production yet, because it needs to become more stable first. Once that’s done, getting an artist to use Natron should be easy. After all, it looks and behaves a lot like Nuke. Besides, it has a great Python API. For instance, I don’t need to create gizmos in TCL like in Nuke.

Once you install CGRU, you will find CGRU’s Natron plugins in the cgru/plugins/natron folder. You should add this path to the NATRON_PLUGIN_PATH environment variable. This will make the Afanasy node available in Natron. Further documentation is available at the project’s website.

Support for Fusion was added by Mikhail Korovyansky. He tested it on v7, but v8 should be supported as well.

Additionally, Keeper now allows quickly changing the local render user name, and rules now allow linking the player to the current frame.

Given the already existing support for Blender in CGRU, a complete libre-based studio solution should now be closer.

CGRU 2.0.7 with Natron and Fusion support is available for download for both Linux and Mac OS X users.


SANE update brings support for over 300 scanners and MFUs on Linux

SANE is not the most frequently updated pack of drivers and associated software around, but when a release does arrive, it delivers.

The newly released SANE backends v1.0.25 features support for over 300 new scanners and multifunction units, quite a few of which have been introduced in the two years since the last release of SANE.

Relevant changes boil down to improvements in a variety of existing drivers (Canon, Fujitsu, Genesys, Kodak, and more) and the arrival of new drivers: epsonds (Epson DS, PX, and WF series) and pieusb (PIE and Reflecta film/slide scanners). The support status page hasn’t been updated to reflect the changes yet.

The scanimage tool finally got support for saving to JPG and PNG (previously it could only save to PNM and TIFF).

The release also features a workaround by Allan Noah for buggy USB3/XHCI support on Linux. This should prevent you from “dancing on your left leg while sacrificing a goat” to launch scanning on newer Linux systems.

Expect an update in your Linux distribution of choice, or grab the source code and DIY.


3D printing support in CUPS demystified

Last week Apple released a new version of CUPS, the default printing system on UNIX and Linux, with what was called “basic support for 3D printers” by pretty much all media, with no details whatsoever. This has already caused some confusion, so we spoke to Michael Sweet and a few other stakeholders about CUPS, the IEEE-ISTO Printer Working Group, and the 3D initiative.

What’s the scope?

Most confusion was caused by the lack of understanding or, rather, the lack of explanation of what CUPS has to do with 3D printing, and how far the PWG’s 3D initiative is supposed to go. This question can easily be answered by the slides from the first birds-of-a-feather face-to-face meeting almost a year ago.

Essentially, it boils down to these few points:

  • networked 3D printers provide little or no feedback over the network;
  • there is no single standardized network protocol for them;
  • there is no open file format to handle most/all state-of-the-art 3D printing capabilities.

So the idea is that users should be able to:

  • easily access a networked printer that has the required materials, and submit a print job;
  • print multi-material objects in a single-material 3D printer, which means the printer gets instructions to stop at a certain layer, let the user change materials, and then proceed further;
  • remotely track printing progress;
  • receive notifications about clogged extruder, filament feed jam, running out of PLA, etc.

As you can see, these requirements are pretty much what people are already used to when dealing with common networked 2D printers in offices.

To aid that, since their first get-together in August 2014, members of the birds-of-a-feather meetings have been working on a white paper that defines an extension to the Internet Printing Protocol to add support for additive manufacturing devices. The whitepaper is focused on, but not limited to, fused deposition modeling, and takes cloud-based printing into consideration.

Suggested extensions to IPP include various new attributes like material name, type, and color, print layer thickness, current extruder temperature, various printer description attributes, and more.
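To give a concrete feel for what such an extension might look like from client code, here is a hedged C sketch that submits a job with a couple of 3D-flavored attributes through CUPS’s standard IPP API. The ipp*/cups* calls are the existing CUPS API; the 3D attribute names and values are purely illustrative guesses at what the whitepaper describes, not standardized IPP attributes:

    /* Sketch only: submit a job with hypothetical 3D-printing IPP attributes.
     * The cups/ipp API calls are real; the 3D attribute names are assumptions
     * paraphrasing the whitepaper, not standardized attributes. */
    #include <cups/cups.h>

    int main (void)
    {
      const char *uri = "ipp://printer.local/ipp/print";   /* example URI */

      ipp_t *request = ippNewRequest (IPP_OP_PRINT_JOB);
      ippAddString (request, IPP_TAG_OPERATION, IPP_TAG_URI,
                    "printer-uri", NULL, uri);
      ippAddString (request, IPP_TAG_OPERATION, IPP_TAG_NAME,
                    "requesting-user-name", NULL, cupsUser ());

      /* Hypothetical attributes in the spirit of the proposed extension. */
      ippAddString  (request, IPP_TAG_JOB, IPP_TAG_KEYWORD,
                     "material-name", NULL, "pla-red");
      ippAddInteger (request, IPP_TAG_JOB, IPP_TAG_INTEGER,
                     "layer-thickness", 200);               /* e.g. microns */

      ipp_t *response = cupsDoFileRequest (CUPS_HTTP_DEFAULT, request,
                                           "/ipp/print", "model.amf");
      if (response)
        ippDelete (response);
      return 0;
    }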

While the whitepaper is getting increasingly detailed with each revision, in a conversation with LGW, Ira McDonald (High North, a PWG member, PWG secretary and IPP WG co-chair) stressed:

This is NOT a standards development project in PWG yet (and may never be). We do have several 3D printer manufacturers and major software vendors who have contributed ideas and privately expressed support. But we’re not at the consumer promotion stage yet. We’re engaging 3D Printing vendors and other standards consortia to gauge interest at present.

Currently CUPS is only used as a testbed for the whitepaper. Michael Sweet (Apple, CUPS, PWG Chair and IPP WG secretary) explains:

CUPS 2.1 added a “3D printer” capability bit to allow 2D and 3D print queues to co-exist on a computer. There is no explicit, out-of-the-box support for 3D printers there, but we’ll be able to experiment and prototype things like the white paper to see what works without seeing 3D printers in the LibreOffice print dialog, for example.

So when you read about support for 3D printers in CUPS elsewhere in the news, you should make a mental note of using a lot of quote marks around the word “support”.

Exploring file formats standardization

The whitepaper only vaguely touches on the topic of an Object Definition Language to be used, and cautiously suggests the AMF file format (ISO/ASTM 52915) developed by ASTM Committee F42 on Additive Manufacturing Technologies, which comprises pioneers of additive manufacturing such as David K. Leigh and represents businesses and institutions such as Met-L-Flo Inc., Harvest Technologies (Stratasys), NIST, etc.

AMF has certain benefits over some older file formats common in manufacturing: multiple materials support, curved surfaces, etc. Unfortunately, the specification is not freely available which has hampered its adoption.

Additionally the participants of the BOF meetings evaluated other options such as STL, DAE (COLLADA), and, more interestingly, 3MF — a file format designed by Microsoft and promoted by 3MF Consortium that brings together companies like HP, Autodesk, netfabb, Shapeways, Siemens, SLM Solutions, Materialise, and Stratasys.

Earlier this year, Michael Sweet reviewed the v1.0 specification of the 3MF file format. He disagreed with some design decisions:

  • the ZIP container makes streaming production almost impossible and adds space and CPU overhead;
  • the job ticket is embedded into document data (and shouldn’t be);
  • limited material support, the only attribute is sRGB color;
  • all colors are sRGB with 8 bit per component precision, CIE- and ICC-based DeviceN color is missing;
  • no way to specify interior fill material or support material.

Even though the Consortium isn’t particularly open, Michael says he’s been in conversation with both the HP and Microsoft reps to the 3MF Consortium:

Based on the responses I’ve received thus far, I think we’ll end up in a happy place for all parties. Also, some of the issues are basically unknowns at this point: can an embedded controller efficiently access the data in the 3MF ZIP container, will the open source 3D toolchains support it, etc. Those are questions that can only be answered by prototyping and getting the corresponding developers on board.

So there’s still work to do on this front.

For developers, the 3MF Consortium provides an open source C++ library called lib3mf, available under what appears to be the BSD 2-clause license.

Who are the stakeholders in the initiative?

First of all, to give you a better idea, the Printer Working Group is a program of the IEEE-ISTO that manages industry standards groups under the IEEE umbrella.

According to Michael Sweet, several PWG members had expressed interest in a 3D track during face-to-face meetings and offline, so the steering committee agreed to schedule BOFs at subsequent face-to-face meetings, starting with the August 2014 one.

Mixed Tray in Stratasys Connex1 3D printer

This is where it gets interesting. None of the current Printer Working Group members are, strictly speaking, core 3D companies. Here’s what it looks like:

  • HP is in partnership with Stratasys and Autodesk (using their Spark platform) and planning to start selling their own Multi Jet Fusion units in 2016.
  • Canon and Fuji Xerox already resell CubePro and ProJet printers made by 3D Systems, and Kyocera got into a partnership with 3D Systems in March 2015 for the very same reason.
  • Brother was last heard of (in early 2014) considering entering the 3D printing market some time in the future.
  • Epson expressed (also in early 2014) a lack of interest in producing consumer-level units and said it wanted to make industrial 3D printers within the next several years.
  • Xerox has been in business with 3D Systems at least since 2013, when they sold part of their solid ink engineering/development team to 3D Systems “to leverage both companies’ 3D printing capabilities to accelerate growth and cement leadership positions”. Moreover, in January 2015, Xerox filed a patent for Printing Three-Dimensional Objects on a Rotating Surface.
  • Ricoh made a loud announcement in September 2014 about jumping into 3D printing business and leading the market, but so far they are simply reselling Leapfrog 3D Printers in Europe and providing printing services in two fablabs in Japan.
  • Samsung, as some sources assert, isn’t planning to enter the market until ca. 2024, however in September 2014, they filed a patent that covers a new proprietary multicolor 3D printing process, and in 2015 they partnered with 3D Systems for a few trade shows.
  • Intel has no related products, but they do support Project Daniel which uses 3D printing to make prosthetic arms for children of war in South Sudan.
  • Most other companies are in the consulting and software/network solutions development business.

Neither of the market-founding companies, Stratasys and 3D Systems (both launched in the late 1980s), is in the PWG. However, since this project is still at a very early stage of evolution, we probably should not expect this to change soon.

Even so, reportedly there’s some off list activity. When asked about the interest of 3D printer vendors in standardization, Michael Sweet replied:

My impression is that while they are interested they are also just starting to look at supporting networking in future products — still a bit early yet for most. Both Ultimaker and Microsoft have provided technical feedback/content that has been incorporated into the white paper, and I’ve been promised more feedback from half a dozen more companies, many of whom actually make printers and software tools for 3D Printers.

The 3D BOF participants have been reaching out to vendors since late 2014, but there are still more companies to talk to. LGW contacted Aleph Objects, Inc., the makers of FSF-approved LulzBot 3D printers. In a conversation, Harris Kenny stated that the team at Aleph Objects hadn’t heard of the PWG 3D initiative before, but is interested in following its progress.

LulzBot TAZ 3D printer

What gives?

While 3D printers are slowly becoming common in companies that need rapid prototyping services and are even creeping into the households of tinkerers, we are not likely to see them become as common as 2D printers any time soon.

A recent study by BCC Research suggests that the global market for 3D printing will grow from $3.8 billion in 2014 to nearly $15.2 billion in 2019. At the same time, another recent study by Smithers Pira estimates that the global printing market will top $980 billion by 2018. There’s a deep black abyss between these two numbers.

The good news is that by the time anyone, for good or bad reason, can own a 3D printer, we might already have all the software bits and protocols in place to make it just work.


Feature image is Sculpture #10 by Pyromaniac.


Red Hat releases free/libre Overpass font family

Red Hat announced the release of Overpass, its own highway-gothic font family designed by Delve Fonts. Overpass is available under the terms of the SIL Open Font License.

In 2011, the company commissioned the project from Delve Withrington. The idea was to reuse the Standard Alphabets For Traffic Control Devices and adapt them to screen resolution limits. Originally, Delve and his team created just the Regular and Bold upright faces. However, in 2014, Red Hat returned to Delve and his team for more weights and faces: under Delve’s direction, Thomas Jockin drew the Light weight, and Dave Bailey assisted with drawing the italics.

The first public version of the font family is available in Extra Light, Light, Regular, and Bold weights, in both upright and italic versions. So far Overpass has complete Extended Latin coverage and support for a variety of OpenType features such as fractions, ligatures, localized forms etc.

Overpass fonts specimen

You can download Overpass as TTF files, as well as WOFF, SVG, and EOT. If you are willing to tweak or enhance the font family, the source VFB files (FontLab Studio) are available on GitHub (it would be nice to have UFO sources there as well).

We spoke with Andy Fitzsimon, a brand manager at Red Hat, about the history of this project and further plans.

Overpass is based on a typeface standard for spatial navigation. Why did you pick it for user interfaces and internal websites? Is it because it’s something people are already accustomed to?

In the earlier days of the Red Hat brand, a way-finding typeface was chosen for various reasons. One quality that I’ve always liked about Highway Gothic is that it has a global cultural association with a common good.

Also, with such prominent characteristics on many glyphs, particularly the angle on many ascenders, it’s a self-governing system to write with. Writing needs to be informative, short, and to the point to be visually appealing. That’s the type of writing Red Hat wants to do: concise, helpful, and standards-born.

How and why did you choose Delve Fonts for the commission?

The Overpass story started with a software distribution branding need. Highway Gothic had the brand look Red Hat was using but not all the options a typographer expects (or any high quality, open source font files).

Red Hat material was already using a commercial digitization of Highway Gothic that had all the bells and whistles designers love (various weights, condensed text, italics, etc.). But using that font meant designs had to be rendered as precomposed images for print and other graphics before being used.

It didn’t make sense to buy a commercial font license for every customer and every community member who touches our software. So branded strings of text had to be baked into images by trained designers with a license. You can see how that would be frustrating if we tried to typographically brand ever-changing UI elements.

The commercial digitisation of Highway Gothic that Red Hat was previously designing with was not available as a webfont and, quite honestly, is still not suited to be one, due to its print-focused, detailed node coordinates — meaning a larger file size than is common with similar webfonts.

At first, the Regular and Bold variants of Overpass were commissioned by our engineering department for use in desktop and web UIs to retain the Red Hat corporate look.

Andy Fitzsimon

One thing to note: the Overpass regular variant is more of a bold, and Overpass bold is more of an extra-bold, which is fine for nav bars and buttons that need to be… bold. But when I came on board the Brand team, my first request from my boss was that we take over the project and expand the series into a light (regular-looking) weight for use on the web, so that our digital content was a little less “shouty”.

I reached out to Delve as the designer of regular and bold to continue the project and he did a tremendous job!

We put the light weight through its paces on redhat.com and even used it as the default weight when we made presentations using reveal.js and other websites.

Since that expansion was a success, we moved on to expanding the series with true italics for use in citations and testimonials. We also added extra light and its italic equivalent so that we could get more conversational when using large font sizes.

Now we’re effectively at our first stable release for the entire family — and we are pretty happy to use Overpass as-is for a while.

We chose to continue to work with Delve Fonts for the entirety of the project because that’s our working style. We know we’re lucky when we have direct contact with a creative expert. Big agencies don’t offer the same kind of access and quick collaboration that we’ve enjoyed when working with Delve Withrington and his team.

Delve Withrington

Currently Overpass has extended Latin coverage. Do you intend to get Delve et al. to add Cyrillics, Arabic etc.?

We haven’t discussed Cyrillic, Arabic, Indic, Korean, Japanese or Chinese expansions of Overpass yet, but the repo is on the project page and we’re more than happy to accept quality commits from interested designers in the community ;-).

Overpass fonts specimen

Aside from Korea, Japan, and China, we tend to do business using the Latin alphabet. So sponsoring those expansions may be a while off. I personally can’t QA other character sets either. For Red Hat, for now, pairing the weights of Overpass with other quality open source fonts like Google’s Noto Sans series is enough for us to get by.

What kind of further improvements is Red Hat willing to invest in?

Eventually, we may expand it to introduce a black weight and/or two monospace variants so that code snippets and command line rules can have a Red Hat look.

What would be examples of Red Hat software titles where Overpass was used for branding?

Today, all our software with a web UI uses Overpass to express the Red Hat brand. Our customer portal, corporate website, presentations, and staff desktops all make use of the font family to do business.

Is Red Hat planning to continue using Overpass in its own branded products now that Overpass is freely available for everyone to use?

As we harden upstream projects into official Red Hat products, we’re going to use Overpass more and more to identify the alignment of our brand to what we make. Our commercial competitors have their own typographic languages. So we’re not worried about confusing the marketplace when it comes to enterprise software.

Overpass has been open source from the beginning, from the stencils of the SAFTCD to the font files you see today. We think that speaks volumes about Red Hat as a company.

The great thing about our corporate font being open source is that we get to watch it grow beyond the walls of our business. Designers will use it for unique and wonderful purposes, some shocking to trained typographers – and that’s okay. It’s a tool for everyone.


ArgyllCMS 1.8.0 released with support for SwatchMate Cube colorimeter

Graeme Gill released a major update to ArgyllCMS with newly added support for two color measurement devices from opposite ends of the price and quality spectrum.

The first supported instrument is SwatchMate Cube, a little fancy colorimeter you can carry around to pick a color swatch from wherever you want, then review the acquired palette on your mobile device (iOS, Android), paste to your Photoshop project etc.

SwatchMate Cube

Cube was successfully crowdfunded a year and a half ago on Kickstarter and caused quite a bit of media excitement as if it was the first portable device ever to pick colors from physical objects (it wasn’t).

Graeme got a Cube mainly for two reasons: because it was made in Melbourne, where he lives, but also to see how this entry-level device (ca. $180 USD) stacks up against more expensive and more commonly used instruments like the X-Rite ColorMunki. He ended up writing a two-part article where he explained why, and exactly how much, the Cube’s readouts are hit and miss (especially for glossy surfaces), and how the device could be further improved.

The other newly supported device is the EX1 by a German company called Image Engineering. The EX1 is a spectrometer for measuring light sources. At €2,800 it’s not exactly something you would throw spare cash at, but rather something you get to ensure the highest color fidelity in a professional environment.

Image Engineering EX1

Other changes include:

  • support for Television Lighting Consistency Index (EBU TLCI-2012 Qa) in spotread and specplot apps’ output;
  • support for adding R9 value to CRI value in spotread and specplot apps’ output;
  • various bugfixes, library dependencies updates etc.

For a complete list of changes have a look at the website. In addition to source code, builds are available for Linux, Windows, and OS X.

Graeme also updated his commercial ArgyllPRO ColorMeter app for Android. The new version features pretty much all the improvements from the new ArgyllCMS release. It also receives readouts from the Cube via Bluetooth Low Energy (USB is available too) and supports using the ChromeCast HDMI receiver as a Video Test Patch Generator. As usual, a demo version of the app is available.
