Tag Archives: Software

Document Liberation Project announces initial QuarkXPress support

The Document Liberation Project (DLP) announced the first release of libqxp, a library for reading QuarkXPress 3.3–4.1 documents. And this is one hell of a trip down memory lane.

The initiative is a perfect fit for the project’s agenda to implement support for as many legacy file formats as possible (see our earlier interview with Fridrich Strba et al.), although the timing is a bit of a puzzle.

History lessons

QuarkXPress was once the king of desktop publishing, with a reported 95% market share at its peak. But corporate greed, overconfidence, and lack of vision pretty much killed it in the early 2000s, and Adobe InDesign nailed the coffin shut.

A typical comment on the Ars article (linked above) looks like this:

We hated Quark, the program and the company. But of course we used it because it was ubiquitous. InDesign 1.0 wasn’t great, but we were so desperate to move away from Quark that we slowly converted.

From many discussions on the web regarding Quark and Adobe, it looks like QXP users mostly got their closure in 2003–2004, when Adobe’s Creative Suite arrived and settled in, although some stuck with Quark’s software through v5 and v6.

Ever since Adobe introduced its subscription-based model in 2013, there’s been a somewhat popular notion that Adobe is the new Quark and is on the road to failure. However, after initial setbacks in 2013 and 2014, the company’s financials have been steadily growing in terms of both revenue and net income. And since the introduction of Creative Cloud in May 2013, Adobe’s stock price is up by ca. 230%. So it looks like they need to try harder to fail.

Although Quark has been trying to win back its former market share by any means deemed necessary, it hasn’t been very successful. The company eventually refocused on automating content creation, management, publishing, and delivery. There are very few businesses around that still run the once-popular QuarkXPress, let alone the versions from 15–20 years ago that DLP focused on. Which brings us back to the actual topic at hand.

What’s in libqxp 0.0.0

The newly released first version of the library is the result of several months of work by Aleksas Pantechovskis, a student from Lithuania, who participated in the Google Summer of Code program this year (again).

Aleksas already had a good track record with the Document Liberation Project. Last year, he wrote libzmf, a library for importing Zoner Callisto/Draw v4 and v5 documents.

In this initial release the libqxp library reads:

  • pages and facing pages;
  • boxes (rectangles, ellipses, Bezier);
  • lines, Bezier curves;
  • text objects, including linked text boxes and text on path;
  • font, font faces, size, alignment, paragraph rules, leading, tabs, underline, outline, shadow, subscript, superscript, caps etc.;
  • colors (including shades), gradients (linear, radial, rectangular);
  • line/frame color, width, line caps and corners, arrows, dashes;
  • object groups;
  • rotation.

Some rather important features like custom kerning and tracking aren’t supported yet, because the OpenDocument file format doesn’t support them. But that’s not much of an issue, according to Aleksas:

librevenge is just interfaces, so if there is another output generation library instead of libodfgen, for a format that supports them, then it can use any attributes passed to it.

One big missing part in this release is support for image objects, because, Aleksas says, the picture format seems to be quite complicated.

Development of libqxp builds on top of reverse-engineering work started by Valek Filippov in OLE Toy in 2013 and continued by David Tardon and Aleksas in February 2017. Although libqxp sticks to ancient versions of QuarkXPress for now, OLE Toy can parse some of the data in QXP v6 and v8 (the format has been encrypted since v5), so this might change in the future.

LibreOffice has already been patched to open QXP files; the feature will be available in v6.0 (expected in early 2018). The library itself ships with the usual SVG converter, which you are likely to find of limited use. Also, if all you need is extracting text, there’s a perfectly sensible qxp2text converter as well.
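
For developers curious about what consuming the library looks like, here is a minimal sketch of a qxp2svg-style converter in C++. The librevenge stream and generator classes are real; the exact libqxp entry points are an assumption modeled on the isSupported()/parse() pattern shared by other DLP libraries such as libcdr and libzmf, so check libqxp.h before copying anything:

    // Hypothetical minimal QXP-to-SVG converter.
    // Assumption: libqxp follows the usual DLP pattern of static
    // isSupported()/parse() entry points; verify against libqxp.h.
    #include <cstdio>
    #include <libqxp/libqxp.h>
    #include <librevenge-generators/librevenge-generators.h>
    #include <librevenge-stream/librevenge-stream.h>

    int main(int argc, char **argv)
    {
      if (argc < 2)
      {
        std::fprintf(stderr, "usage: %s file.qxp\n", argv[0]);
        return 1;
      }

      librevenge::RVNGFileStream input(argv[1]);
      if (!libqxp::QXPDocument::isSupported(&input)) // assumed signature
      {
        std::fprintf(stderr, "not a supported QuarkXPress document\n");
        return 1;
      }

      // librevenge is "just interfaces": the parser emits drawing callbacks,
      // and any generator can consume them -- SVG here, ODG via libodfgen
      // in LibreOffice.
      librevenge::RVNGStringVector pages;
      librevenge::RVNGSVGDrawingGenerator generator(pages, "svg");
      if (!libqxp::QXPDocument::parse(&input, &generator)) // assumed signature
        return 1;

      for (unsigned i = 0; i < pages.size(); ++i)
        std::printf("%s\n", pages[i].cstr()); // one SVG document per page
      return 0;
    }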

Support in Scribus

One would rightfully expect Scribus to be the primary beneficiary of libqxp. But first, some background.

First of all, the history between Quark and Scribus is rather hairy.

Initially, Scribus was pretty much modeled after QuarkXPress, and the two projects still share some similarities. Early in the history of Scribus, it made a lot of sense to introduce support for QXP files. Users, fed up with Quark’s continuous quirks and bad user support, would jump ship at the very next opportunity.

Paul Johnson, a former Scribus contributor, actually started working on support for QXP files in 2004. But after he posted to a public mailing list about his progress, he reportedly received a cease-and-desist letter from Quark.

Scribus was nowhere near its current fame at the time, and even now it would not be able to handle the legal expenses (save for a theoretical FSF intervention). Back then, Paul just stopped working on that project.

Quark didn’t quit monitoring Scribus, though, and continued tracking the progress of the project to the point where developers jokingly discussed blocking Quark’s IP address range from accessing Scribus’s source code repository (they reportedly had logs of the visits). Eventually Quark turned its attention towards more pressing matters, like losing its market share to Adobe.

Today, much like LibreOffice, Scribus supports both ubiquitous file formats like IDML and bizarre ones like those by Calamus and Viva Designer. It even has support for Quark’s XTG files. Getting a QXP importer would fit Scribus’s narrative perfectly.

The team is well aware of the libqxp project, and they already have experience writing librevenge-based importers for CorelDRAW, Microsoft Publisher, Macromedia FreeHand etc. So it’s likely just a matter of time till they introduce a QuarkXPress importer.

Is there any closure left to get?

Read More

Inkscape hackfest planned for late June in Paris

Following productive hackfests in 2015 and 2016, the Inkscape team is meeting in Paris later this month for another hackfest. The event is taking place from June 27 through July 1 at Paris’s modern science museum, the Cité des sciences et de l’industrie.

(Not quite) coincidentally, the venue is exactly where, in 2008, part of the original documentation team met for the first time to work on the official user manual.

So far the hackfest agenda seems to cover many topics from the official roadmap for the next major update of Inkscape: the GTK+3 port, the coordinate system flip, making a C++11 compiler a requirement, splitting less-maintained extensions into an extra package, and improving performance. It’s another reminder that, should the team stick to the plan, they will need all the help they can get to prepare the next release in a sensible amount of time.

Attendees include core developers like Tavmjong Bah, Martin Owens, and Jabier Arraiza, as well as contributors like C Rogers, Cédric Gemy, and Elisa de Castro Guerra. Apart from the programming sessions, there’s a community meet-up planned for Saturday, July 1st.

The team is currently revamping the project’s infrastructure. Most recently, they moved to GitLab for source code hosting and bug tracking, marking a departure from Canonical’s Launchpad and Bazaar.

Read More

Krita To Kickstart New Text And Vector Tools

Krita Foundation announced their third Kickstarter project to fund development of new text and vector tools. With the proposed features, the team aims to improve the user experience for, among others, comic book and webcomic artists.

Essentially, the team will ditch the Text tool inherited from Calligra Suite and create an easier-to-use UI for managing text and its styling, improve support for RTL and complex scripts (think CJK, Devanagari), add text-on-path editing, non-destructive bending and distortion of text items etc.

Additionally, they will completely switch to SVG as an internal storage format for vector graphics and improve usability of related editing tools.

There are also 24 stretch goals: from composition guides to reference image docker improvements to LUT baking. In all likelihood we are going to see at least some of the stretch goals done: that was the case for both past Kickstarter campaigns, and after the first two days this new campaign is already ca. 30% funded.

As usual, LGW asked project leader Boudewijn Rempt some technical questions about the development plans within the campaign.

Given the focus on text and vector tools, how many bits of Calligra Suite does Krita still share with the original project?

There is nothing shared anymore: the libraries that we used to share have been forked, so Calligra and Krita have separate and, by now, very different versions of those libraries. That was a really tough decision, but in the end we all realized that office and art applications are just too different.

So, we’ll probably drop all the OpenDocument loading and saving code in favor of SVG, with just an OpenDocument to SVG converter for compatibility with old KRA files.

We’ll implement a completely new text tool and drop the old text tools and their libraries. As for the vector tools, we’ll keep most of that code, since it is already half-ported to SVG, but we’ll rework the tools to work better in the context of Krita.

How far do you think Krita should go in terms of vector tools? I’m guessing you wouldn’t want to duplicate Karbon/Inkscape. But importing/exporting (EPS, AI, PDF, CDR etc.), boolean operations on paths, masks and clipping paths, groups, and suchlike?

For import/export, only SVG. And the functionality we want to implement first is what’s really important for artists: it must support the main thing, the raster art. So, things like vector-based speech balloons for comics, or decorative borders for trading cards, or some kinds of effects. Boolean ops on paths are really important for comic book frames, for instance.

Regarding text direction flow and OpenType features: how much do Qt and HarfBuzz provide for Krita already, and how much (and what exactly) do you need to write from scratch?

Qt’s text layout is a bit limited; it doesn’t do top-to-bottom for Japanese, for instance. So likely we’ll have to write our own layout engine, but we’ll be using HarfBuzz for the glyph shaping.

Do you think it’s faster/easier to write and maintain your own engine than to patch Qt?

Well, they serve different purposes: Qt’s layout engine is general-purpose and mostly meant for things like the text editor widget or QML labels. We want things like automatic semi-random font substitution that places glyphs from different fonts, so we can have a better imitation of hand-lettered text, for instance. How far we’ll be able to take this is a bit of an adventure!

Some specifics of the proposed implementation make it look like you would slightly extend SVG. Is that correct?

Well, first we’ll look at what SVG2 proposes and see if that’s enough, then we’ll check what Inkscape is doing, and if we still need more flexibility, we’ll start working on extending SVG with our own namespace.

For vectors, I don’t think that will be necessary, but it might be necessary for text. If the kickstarter gets funded, I suspect I’ll be mailing Tavmjong Bah a lot!

Stretch goals cover all aspects of Krita: composition, game art, deep painting, general workflow improvements. How did you compile the list?

This January, we had a sprint in Deventer with some developers and some artists (Dmitry, me, beelzy, wolthera), where we went through all the wish bugs and feature requests and classified them. That gave us a big list of wishes of stretch goal size. Then later on, Timothée, Wolthera, Irina, and me sat down and compiled a list that felt balanced: some things that almost made it in past years, some new things, bigger things, smaller things, something for every user.

One of the stretch goals is audio import for animation sequences. How far are you willing to go there? Just the basics, or do you see things like lipsync happen in the future?

Just the basics: we discussed this with the animators in our community, and lipsyncing just isn’t that much of a priority for them. It’s more about having the music and the movement next to each other.

But that suggests multiple audio objects on the timeline, or would it be just a single track preprocessed in something like Ardour?

For now, a single track!

Read More

darktable 2.0 released with printing support

Darktable, the free RAW processing software for Linux and Mac, got a major update just in time for the festive season.

The most visible new feature is the print module that uses CUPS. Printing is completely color-managed, and you can tweak the positions of images on paper etc. All the basics are in place.

print module in darktable

A nice “perk” of this new feature is PDF exporting in the export module.

The other important change is improved color management support. The darkroom mode now features handy toggles for softproofing and gamut check below the viewport (darktable uses a cyan color to fill out-of-gamut areas). Additionally, thumbnails are properly color-managed now.

Something I personally consider a major improvement in terms of getting darktable to work nicely out of the box is that the viewport is finally automatically sized. No longer do you need to go through the trial-and-error routine of setting it up in the preferences dialog. It just works. Moreover, the mipmap cache has been replaced with a thumbnail cache, which makes a huge difference. Everything is really a lot faster.

film grain added in darktable

If you care about losing your data (of course you do), darktable 2.0 finally supports deleting images to the system trash (where available).

The port to the Gtk+3 widget set is yet another major change that you might or might not care much about. It’s mostly there to bring darktable up to date with recent changes in Gtk+ and to simplify support for HiDPI displays (think Retina, 4K, 5K etc.).

The new version features just two additional image processing modules:

  • Color reconstruction attempts to restore useful data from overexposed areas in your photos.
  • Raw black/white point module is pretty much an internal feature that the team hopes you never ever touch (of course you will). It was a prerequisite step towards dual-ISO support and better denoising.

Other existing modules got all sorts of tweaks and updates. Most notably, deflicker from Magic Lantern was added to the exposure module.

Additionally, the watermark module features a simple-text.svg template now, so that you could apply a configurable text line to your photos. Which means that with a frame plugin and two instances of watermark you can use darktable for the most despicable reason ever:

making a meme in darktable

The most important change in Lua scripting is that scripts can now add buttons, sliders, and other user interface widgets to the lighttable view. To that end, the team started a new repository for scripts on GitHub.

Finally, the usual part of every release: updates to camera support:

  • Base curves for 8 more cameras by Canon, Olympus, Panasonic, and Sony.
  • White balance presets for 30 new cameras by Canon, Panasonic, Pentax, and Sony.
  • Noise profiles for 16 more cameras by Canon, Fujifilm, Nikon, Olympus, Panasonic, Pentax, and Sony.

For a more complete list of changes, please refer to the release announcement. Robert Hutton also shot a nice video covering the most important changes in this version of darktable.

LGW spoke to Johannes Hanika, Tobias Ellinghaus, Roman Lebedev, and Jeremy Rosen.

Changes in v2.0 could be summarized as one major new feature (printing) and lots of both under-the-hood and user interaction changes (Gtk+3 port, keyboard shortcuts etc.). All in all, it’s more of a gradual improvement of the existing features. Is this mostly because of the time and effort that the Gtk+3 port took? Or would you say that you are now at the stage where the feature set is pretty much settled?

Tobias: That’s a tough question. The main reason was surely that the Gtk+3 port took some time. Secondly, the main motivation for most of us is scratching our own itches, and I guess that most of the major ones are scratched by now. That doesn’t mean that we have no more ideas about what we’d like to see changed or added, but at least most of the low-hanging fruit is picked, so everything new takes more time and effort than the big changes done in the past.

Roman: The Gtk+3 port, as it seems, was the thing that got me initially involved with the project. On its own, just the port (i.e. rewriting all the necessary things and making it compile and be mostly functional) did not take too long, no more than a week, and was finished even before the previous release happened (v1.6, that is). But it was the stabilization work, i.e. fixing all those small things that are hard to notice but are irritating and make for a bad user experience, that took a while.

Johannes: As far as I’m concerned, yes, darktable is feature complete. The under-the-hood changes are also pretty far-reaching and another reason why we call it 2.0.0. The Gtk+3/GUI part is of course the most visible and the one you can most easily summarize.

Jeremy: I’d like to emphasize the “under the hood” part. We did rewrite all our cache management, and that’s a pretty complicated part of our infrastructure. I don’t think this cycle was slow, it’s just that most of it is infrastructure work needed if we want darktable’s visible feature set to grow in the future…

color balance adjusted in darktable

Darktable seems to be following the general industry trend where software for processing RAW images becomes self-sustained, with non-destructive local editing features such as a clone tool, as well as sophisticated selection and masking features. In the past, I’ve seen you talking about not trying to make a general-purpose image editor out of darktable, but these features just seem to crawl in no matter what; you are even considering adding a Liquify-like tool made by a contributor. Would you say that your project vision has substantially changed over time? How would you define it now?

Tobias: I don’t see too many general image manipulation features creeping in. We have had masks for a while, and the liquify/warping thing would be another one, but besides that I don’t see anything. There is also the question of where to draw the line. Is everything besides global filters (exposure, levels, …) already a step towards a general-purpose editor? Are masks the line being crossed? I don’t know for sure, but for me it’s mostly pushing individual pixels, working with layers, merging several images. We do none of those, and I hope we never will.

Johannes: I think this is caused by how darktable is governed. It’s very much driven by the needs of individual developers, and we’re very open when it comes to accepting the work of motivated contributors. We have a large dev base, so I guess it was just a matter of time until someone felt the need for this or that and just went ahead and implemented it. I guess you could say we weren’t consistent enough in rejecting patches, but so far I don’t think this strategy has hurt us much. On the contrary, it helps to foster a large community of motivated developers.

HDR merging does exist though, and there’s even a feature request to add manual/automatic alignment. And both duplication and configurable blending of processing modules are a lot like working with layers, even though the processing pipeline is fixed.

Tobias: Yes, but that doesn’t counter my point: editing single pixels is out of scope; general calculations like that fit.

Johannes: To give a very specific answer to this very specific question: the HDR merging works on pre-demosaic raw data (which is why we have it, it’s substantially simpler than/different to other tools except Wenzel’s hdrmerge which came after IIRC). So automatic alignment is not possible (or even manual for that matter).

exposure adjusted in darktable

Have you already defined any major milestones for future development?

Tobias: No. Version 2.0 had the predefined milestone “Gtk+3 port”, but that was an exception. Normally we start working on things we like, new features pile up and at some point we say “hey, that looks cool already, and we didn’t have a release for a while, let’s stabilize and get this to the users”. There is a lot less planning involved than many might think.

Roman: As Tobias said, there are rarely pre-defined milestones. It is more like, someone has some cool idea, or needs some functionality that is not there yet, and he has time to implement it.

Personally, I have been working on an image operation for highlight reconstruction via inpainting. There are several such operations in darktable already, but frankly, that is currently one of the important features still not completely handled by darktable.

There has been a lot of preparatory under-the-hood work over the last two releases, which has now opened the possibility of some interesting things, say, native support for Magic Lantern’s Dual ISO, or a new version of our profiled denoise image operation.

I’m also looking into adding yet another process() function to image operations that would not use any intrinsic instructions, only OpenMP SIMD, thus freeing darktable from any hard dependency on x86 processors, i.e. it could work on ARM64 too.

Jeremy: I would like to add the manipulation of actual image parameters to Lua; that is a big chunk of work. Apart from that, it will mainly depend on what people do/want to do.

What kind of impact on users’ workflows do you think the addition of Lua scripting has had so far? What are the most interesting things you’ve seen people do with Lua scripting in darktable?

Tobias: Good question. We have been slowly adding Lua support since 1.4, but only now are we getting to a point where more advanced features can be done. In the future I can see quite some fancy scripts being written that people can just use, instead of everyone coding the same helpers over and over again. That’s also the motivation for our Lua scripts repository on GitHub. While there are some official scripts, i.e. those mostly written and maintained by Jeremy and me, we want them to be seen as an extension to the Lua documentation, so that others can get ideas on how to use our Lua API.

The results of that can be seen in the ‘contrib’ directory. The examples there range from background music for darktable’s slideshows to a hook that uses ‘mencoder’ to assemble timelapses. We hope to see many more contributions in the future.

Jeremy: Lua was added mainly for users who have a specific workflow that goes against the most common one. Darktable will follow the most common workflow, but Lua will allow other users to adapt DT to their specific needs.

That being said, I agree with Tobias that Lua in 1.6 was still missing some bricks to make it really useful. Without the possibility to add widgets (buttons, sliders etc.) to darktable, it was impossible to make a script that was really usable without technical knowledge.

With the Lua repository and the possibility to add widgets, things should go crazy really fast. Did you know that you can remote-control darktable via d-bus by sending Lua commands?
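
In case you’re curious, here’s a rough sketch of what that remote control could look like from C++ using GIO. The GDBus calls are standard GLib API; the darktable bus name, object path, interface, and method are assumptions based on how the feature has been described, so verify them (e.g. with d-feet) before relying on this:

    // Hedged sketch: send one line of Lua to a running darktable over d-bus.
    // The names org.darktable.service, /darktable,
    // org.darktable.service.Remote, and Lua are assumptions to verify.
    #include <gio/gio.h>
    #include <cstdio>

    int main()
    {
      GError *error = NULL;
      GDBusConnection *bus = g_bus_get_sync(G_BUS_TYPE_SESSION, NULL, &error);
      if (!bus)
      {
        std::fprintf(stderr, "%s\n", error->message);
        return 1;
      }

      // Ask darktable to execute a single Lua statement.
      GVariant *result = g_dbus_connection_call_sync(
          bus, "org.darktable.service", "/darktable",
          "org.darktable.service.Remote", "Lua",
          g_variant_new("(s)", "print(darktable.configuration.version)"),
          NULL, G_DBUS_CALL_FLAGS_NONE, -1, NULL, &error);
      if (!result)
      {
        std::fprintf(stderr, "%s\n", error->message);
        g_object_unref(bus);
        return 1;
      }

      g_variant_unref(result);
      g_object_unref(bus);
      return 0;
    }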

white balance adjusted in darktable

In the early days of darktable quite a few features (e.g. wavelet-based ones) came directly from papers published at SIGGRAPH etc. What’s your relationship with the academic world these days?

Tobias: We didn’t add many new image operations recently, and those that got added were mostly not so sophisticated that we had to take the ideas from papers. That doesn’t mean that our link to the academic world was dropped; Johannes is still working as a researcher at a university, and when new papers come out we might think about implementing something new, too.

Johannes: Yes, as Tobias says. But then again, graphics research is my profession, and darktable is for fun. No, seriously, the last few SIGGRAPHs didn’t have any papers that seemed to me a good fit for implementation in darktable.

Several years ago you switched to the rawspeed library by Klaus Post from the Rawstudio project. Now it looks like darktable is the primary “user” of rawspeed, and your own Pedro Côrte-Real is the 2nd most active contributor to the library. Doesn’t it feel at least a tiny bit weird? 😉

Tobias: I think it’s a great example of how open source software can benefit from each other. I’m not sure if that’s weird or just a bit funny.

How has your relationship with the Magic Lantern project been evolving, given the deflicker feature etc.?

Tobias: The deflicker code wasn’t so much contributed by the Magic Lantern folks as written by Roman with inspiration from how Magic Lantern does it. I don’t know if he used any code from them, maybe he can clarify. Apart from deflicker, there are also plans to support their dual-iso feature natively.

Roman: The only direct contribution from the Magic Lantern project was the highlight reconstruction algorithm that made it into v1.6. The deflicker was implemented by me, as it usually happens, after I needed a way to auto-expose lots of images and found no way to do it. That being said, it uses exactly the same math as deflick.mo does.

Tobias: Even that did not involve taking code from them. Jo wrote it after talking with Alex at LGM.

Johannes: But it was most inspiring meeting those folks in person. And yes, I have been a lazy ass about implementing this dual-iso support natively in darktable ever since LGM.

Darktable seems to be doing pretty well without any kind of community funding, which is all the rage these days. What do you think are the causes of that?

Tobias: Well, we’d need some legal entity that takes care of taxes. And to be honest, we don’t need that much money. Our server is sponsored by a nice guy and there are no other expenses. Instead we have been asking our users to donate to LGM for several years now and from what we can see that helped a lot.

As for why we have been doing so well, no idea. Maybe because we are doing what we want without caring if anyone would like it. To the best of our knowledge darktable has exactly 17 users (that number is measured with the scientific method of pulling it out of thin air), so whatever we do, we can lose at most those few. Nothing to worry about.


The new version of darktable is available as source code and a .dmg for Mac OS X. Builds for various Linux distributions have either already landed or are pending.

Read More

GIMP 2.9.2 Released, How About Features Trivia?

In a surge of long overdue updates, the GIMP team made the first public release in the 2.9.x series. It’s completely GEGL-based and has 16/32-bit per channel editing and new tools. It’s also surprisingly stable, even for the faint of heart.

Obligatory disclaimer: I’m currently affiliated with the upstream GIMP project. Please keep that in mind when you think you’ve stumbled upon a biased opinion and feel like calling LGW out.

One might expect a detailed review here, which totally makes sense; however, writing two similar texts for both the upstream GIMP project and LGW would seem unwise. So there: the news post at GIMP.org briefly covers most angles of this release, while this article focuses on features trivia and possible areas of contribution.

The GEGL port and HDR

Originally launched in 2000 by a couple of developers from the Rhythm & Hues visual effects studio, the GEGL project didn’t have it easy. It took 7 years to get it into GIMP at all, then another 8 years to power all of GIMP.

So naturally, after years and years (and years) of waiting, the very first thing people would check in GIMP 2.9.2 is the new image precision options.

First and foremost, 64-bit is there mostly for show right now, although GIMP will open and export 64-bit FITS files, should you find any.

That said, you can use GIMP 2.9.2 to open a 32-bit float OpenEXR file, adjust color curves, apply filters, then overwrite that OpenEXR file or export it under a different name. Job done.

The same applies to PNG, TIFF, and PSD files: the respective plugins have been updated to support 16/32-bit per channel data to make high bit depth support actually useful even for beta testers.

All retouching and color adjustment tools, as well as most, if not all, plugins are functional in 16/32-bit modes. There’s also basic loading and exporting of OpenEXR files available (no layers, no fancy features from v2.0 of the format).

GIMP also provides several tonemapping operators via the GEGL tool, should you want to go back to low dynamic range imaging.

Mantiuk06 tonemapping operation

There are, however, at least two major features in GEGL that are not yet exposed in GIMP:

  • RGBE (.hdr) loading and exporting;
  • basic HDR merging from exposure stacks.

This is one of the areas where an interested developer could make a useful contribution at a fairly low cost in development time.

In particular, adding a GEGL-based HDR merge tool to GIMP should be easier now thanks to a widget for using multiple inputs to one GEGL operation (which would be exp-combine).
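
To illustrate the plumbing involved, here is a small sketch in C++ against GEGL’s C API that feeds two sources into one operation. It uses gegl:over, a real two-input compositing op, since I haven’t verified exp-combine’s input pad names; an HDR merge tool would wire exposure stacks into gegl:exp-combine the same way:

    // Minimal sketch of connecting multiple inputs to a single GEGL node.
    // File names are placeholders; gegl:over composites "aux" over "input".
    #include <gegl.h>

    int main(int argc, char **argv)
    {
      gegl_init(&argc, &argv);
      {
        GeglNode *graph = gegl_node_new();
        GeglNode *bg = gegl_node_new_child(graph,
            "operation", "gegl:load", "path", "background.png", NULL);
        GeglNode *fg = gegl_node_new_child(graph,
            "operation", "gegl:load", "path", "foreground.png", NULL);
        GeglNode *over = gegl_node_new_child(graph,
            "operation", "gegl:over", NULL);
        GeglNode *save = gegl_node_new_child(graph,
            "operation", "gegl:save", "path", "merged.png", NULL);

        // One connection per named input pad.
        gegl_node_connect_to(bg, "output", over, "input");
        gegl_node_connect_to(fg, "output", over, "aux");
        gegl_node_link(over, save);

        gegl_node_process(save);
        g_object_unref(graph);
      }
      gegl_exit();
      return 0;
    }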

GEGL operations

Currently 57 GIMP plugins are listed as completely ported to become GEGL operations, and 27 more ports are listed as work in progress. That leaves 37 more plugins to port, so the majority of the work appears to be done.

Additionally, GEGL features over 50 original filters, although some of them are currently blacklisted because they need to be completed. Also, some of the new operations were written to implement certain features in GIMP tools. E.g. the Distance Map operation is used by the Blend tool for the Shape Burst mode, and both matting operations (Global and Levin) are used by the Foreground Select tool to provide mask generation with subpixel precision (think hair and other thin objects).

Various new operations exposed in GIMP, like Exposure (located in the Colors menu) and High Pass (available via the GEGL tool), are quite handy in photography workflows.

Note that if you are used to the “Mono” switch in the Channel Mixer dialog, this desaturation method is now available through a dedicated Mono Mixer operation (Colors->Desaturate submenu). It might take some getting used to.

Mono Mixer

It’s also worth mentioning that 41 of the operations, counting both ports and original GEGL ones, have OpenCL versions, so they can run on a GPU.

And while the immensely popular external G’MIC plugin is not going to become a GEGL operation any time soon (most likely, ever), it has recently become ready to be used in conjunction with GIMP 2.9.x in any precision mode.

There are some technical aspects about GIMP filters and GEGL operations in GIMP 2.9.x that you might want to know as well.

First of all, some plugins have only been ported to use GEGL buffers, while others have become full-blown GEGL operations. In terms of programming time, the former is far cheaper than the latter, so why go the extra mile when GIMP 2.10 is long overdue and time could be spent more wisely?

Softglow

Porting plugins to use GEGL buffers simply means that a filter can operate on whatever image data you throw at it, be it 8-bit integer or 32-bit per channel floating point. Which is great, because e.g. Photoshop CS2 users who tried the 32-bit mode quickly learned there was quite a lot they couldn’t do, at least until CS4, released several years later.

The downside of this comparatively cheap approach is that in the future non-destructive GIMP these filters would be sad destructive remnants of the past. They would take bitmap data from a buffer node in the composition tree and overwrite it directly, so you would not be able to tweak their settings at a later time.

So the long-term goal is still to move as much as possible to GEGL. And that comes at a price.

First of all, you would have to rewrite the code in a slightly different manner. Then you would have to take an extra step and write some special UI in GIMP for the newly created GEGL op. The reason?

While the GEGL tool skeleton is nice for operations with maybe half a dozen settings (see the Softglow filter screenshot above), something like an automatically generated UI for e.g. Fractal Explorer would soon make you lose your cool:

Old vs. new Fractal Explorer

The good news is that writing custom UIs is not particularly difficult, and there are examples to learn from, such as the Diffraction Patterns op:

Diffraction Patterns operation

As you can see, it looks like the former plugin with its tabbed UI, and it has all the benefits of being a GEGL operation, such as on-canvas preview, named presets, and, of course, being future-proof for non-destructive workflows.

FFmpeg support in GEGL

If you have already read the changelog for the two latest releases of GEGL, chances are that you are slightly puzzled about FFmpeg support. What would GEGL need it for? Well, there’s some history involved.

Øyvind Kolås started working on GEGL ca. 10 years ago by creating its smaller fork called gggl and using it for a video compositor/editor called Bauxite. That’s why GEGL has FFmpeg support in the first place.

Recently Øyvind was sponsored by The Grid to revive ff:load and ff:save operations. These ops drive the development of the iconographer project and add video capabilities to The Grid’s artificial intelligence based automatic website generator.

The FFmpeg-based loading and saving of frames could also come in handy for the GIMP Animation Package project, should it receive a much-needed revamp. At the very least, it would simplify loading frames from video files into GIMP.

New Tools

The new version has 6 new tools—2 stable, 4 experimental. Here’s some trivia you might want to know.

GIMP is typically referred to as a tool that falls behind Photoshop. Opinions of critics differ: some say it’s like Photoshop v5, others graciously upgrade it all the way to a CS2 equivalent.

If you’ve been following the project for a while, you probably know that, anecdotally, the Liquid Rescale plugin was made available a year ahead of Content-Aware Scaling in Photoshop CS4. And you probably know that Resynthesizer made inpainting available in GIMP a decade before Content-Aware Fill made its way to Photoshop CS5.

But there’s more. One of the most interesting new features in GIMP 2.9.2 is the Warp Transform tool, written by Michael Muré during the Google Summer of Code 2011 program.

It’s the interactive on-canvas version of the venerable iWarp plugin, which looked very much like a poor copy of Photoshop’s Liquify filter. Except iWarp was introduced to GIMP in 1997, while Liquify first appeared in Photoshop 6, released in 2000.

Warp Transform reproduces all the features of the original plugin, including animation via layers, and adds the sorely missing Erase mode, designed to selectively retract some of the deformations you’ve made. The mode isn’t functioning correctly yet, so you won’t restore original data to its pixel-crisp state, but there are a few more 2.9.x releases ahead to take care of that.

The Unified Transform tool is a great example of how much an interested developer can do if he/she is persistent.

Originally, merging the Rotate, Scale, Shear, and Perspective tools into a single one was roughly scheduled for version 3.6. This would prove to be challenging, what with the Sun having exploded by that time and the Earth being a scorched piece of rock rushing through space, with a bunch of partying water bears on its back.

But Mikael Magnusson decided he’d give it a shot out of curiosity. When the team discovered that he had already done a good chunk of the work, he was invited to participate in the Google Summer of Code 2012 program, where he completed the work.

Unfortunately, it’s also an example of how much the GEGL port delayed getting cool new features into the hands of benevolent, if slightly irritated masses.

Internal Search System

Over the years GIMP has amassed so many features that locating them can be a bit overwhelming for new users. One way to deal with this is to review the menu structure, plugin names, their tooltips in the menu etc., and maybe cut the most bizarre ones and move them into some sort of an ‘extras’ project.

Srihari Sriraman came up with a different solution: he implemented an internal search system. The system, accessible via Help->Search and Run a Command, reads the names of menu items and their descriptions and tries to find a match for the keyword you specify in the search window.

Searching action in GIMP

As you can see, it does find irrelevant matches, because some tooltips provide an overly technical explanation (unsharp mask uses blurring internally to sharpen, and the tooltip says so, hence the match). This could eventually lead to some search optimization of tooltips.

Color Management

The news post at gimp.org casually mentions the completely rewritten color management code in GIMP. What it actually means is that Michael Natterer postponed the 2.9.2 release in April (originally planned to coincide with Libre Graphics Meeting 2015) and focused on rewriting that code for the next half a year.

The old color management plugin has been completely removed. Instead, libgimpcolor, one of GIMP’s internal libraries, got a new API for accessing ICC profile data, color space conversions etc.

Since GIMP reads and writes OpenEXR files now, it seems obvious that GIMP should support ACES via OpenColorIO, much like Blender and Krita. This has been only briefly discussed by the team so far, and the agreement is that a patch would be accepted for review. So someone needs to sit down and write the code.

What about CMYK?

Speaking of color, nearly every time there’s a new GIMP release, even if it’s just a minor bugfix update, someone asks whether CMYK support has been added. This topic is now covered in the new FAQ at gimp.org, but there’s one more tiny clarification to make.

Since autumn 2014, GEGL has had an experimental (and thus not built by default) operation called Ink Simulator. It’s what one might call a prerequisite for implementing full CMYK support (actually, separation into an arbitrary number of plates) in GIMP. While the team gives this task a low priority (see the FAQ for an explanation), this operation is a good start for someone interested in working on CMYK in GIMP.

Digital Painting

Changes to the native brush engine in GIMP are minor in the 2.9.x series due to Alexia’s maternity leave. Even so, painting tools got Hardness and Force sliders, as well as the optional locking of brush size to zoom.

Somewhat unexpectedly, most other changes in the painting department stem indirectly from the GIMP Painter fork by sigtech. The team evaluated various improvements in the fork and reimplemented them in the upstream GIMP project.

Canvas rotation and horizontal flipping. Featuring artwork by Evelyne Schulz.

Interestingly, while most of those new features might look major to painters, they actually turned out to be low-hanging fruit in terms of programming effort. Most bits had already been in place, hence GIMP 2.9.2 features canvas rotation and flipping, as well as an automatically generated palette of recently used colors.

Another new feature is experimental support for the MyPaint Brush engine. This is another idea from the GIMP Painter fork. The implementation is cleaner in programming terms, but it is quite incomplete and needs serious work before the new brush tool can be enabled by default.

MyPaint Brush tool

Some Takeaways For Casual Observers and Potential Contributors

As seen in the recently released GIMP 2.9.2, the upcoming v2.10 is going to be a massive improvement, with highlights such as:

  • high bit depth support (16/32-bit per channel);
  • on-canvas preview for filters;
  • OpenEXR support;
  • better transformation tools;
  • new digital painting features;
  • fully functional color management;
  • improved file format support.

Much of what could be said about the development pace in the GIMP project has already been extensively covered in a recent editorial.

To reiterate, a lot of anticipated new features are blocked by the lack of GIMP 2.10 (complete GEGL port) and GIMP 3.0 (GTK+3 port) releases. There are not enough human resources to speed it up, and available developers are not crowdfundable due to existing work and family commitments.

However, for interested contributors there are ways to improve both GIMP and GEGL without getting frustrated by the lack of releases featuring their work. Some of them have been outlined above, here are a few more:

  • Create new apps that use GEGL (example: GNOME Photos).
  • Port more GIMP filters to GEGL or create entirely new GEGL operations (both would be almost immediately available to users).
  • Create OpenCL versions of GEGL operations.

All of these contributions will directly or indirectly improve GIMP.

With that—thanks for reading!

Read More

Afanasy Render Farm Manager Gets Natron Support

Timur Hairulin released an update of his free/libre CGRU render farm management tools.

The newly arrived version of CGRU features support for Natron, the free/libre VFX compositing and animation software, and for Fusion, one of its proprietary counterparts, made by Blackmagic Design.

Timur has great hopes for Natron:

I still haven’t used it in production, because it needs to become more stable first. Once that happens, getting an artist to use Natron should be easy. After all, it looks and behaves a lot like Nuke. Besides, it has a great Python API. For instance, I don’t need to create gizmos in TCL like in Nuke.

Once you install CGRU, you will find CGRU’s Natron plugins in the cgru/plugins/natron folder. You should add this path to the NATRON_PLUGIN_PATH environment variable. This will make the Afanasy node available in Natron. Further documentation is available at the project’s website.

Support for Fusion was added by Mikhail Korovyansky. He tested it on v7, but v8 should be supported as well.

Additionally, Keeper now allows quickly changing the local render user name, and rules now allow linking the player to the current frame.

Given the already existing support for Blender in CGRU, a complete libre-based studio solution should be closer now.

CGRU 2.0.7 with Natron and Fusion support is available for download for both Linux and Mac OS X users.

Read More

3D printing support in CUPS demystified

Last week Apple released a new version of CUPS, the default printing system on UNIX and Linux, with what pretty much all media called “basic support for 3D printers”, with no details whatsoever. This has already caused some confusion, so we spoke to Michael Sweet and a few other stakeholders about CUPS, the IEEE-ISTO Printer Working Group, and the 3D initiative.

What’s the scope?

Most confusion was caused by the lack of understanding or, rather, the lack of explanation of what CUPS has to do with 3D printing, and how far the PWG’s 3D initiative is supposed to go. This question can easily be answered by the slides from the first birds-of-a-feather face-to-face meeting almost a year ago.

Essentially, it boils down to these few points:

  • networked 3D printers provide little or no feedback over the network;
  • there is no single standardized network protocol for them;
  • there is no open file format to handle most/all state-of-the-art 3D printing capabilities.

So the idea is that users should be able to:

  • easily access a networked printer that has the required materials, and submit a print job;
  • print multi-material objects in a single-material 3D printer, which means the printer gets instructions to stop at a certain layer, let the user change materials, and then proceed further;
  • remotely track printing progress;
  • receive notifications about clogged extruder, filament feed jam, running out of PLA, etc.

As you can see, these requirements are pretty much what people are already used to when dealing with common networked 2D printers in offices.

To aid that, since their first get-together in August 2014, members of the birds-of-a-feather meetings have been working on a white paper that defines an extension to the Internet Printing Protocol to add support for additive manufacturing devices. The white paper focuses on, but is not limited to, fused deposition modeling, and takes cloud-based printing into consideration.

Suggested extensions to IPP include various new attributes like material name, type, and color, print layer thickness, current extruder temperature, various printer description attributes, and more.
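
To make the idea more concrete, here is a hedged sketch of requesting such attributes with the CUPS API from C++. The HTTP/IPP calls (httpConnect2, ippNewRequest, cupsDoRequest and friends) are standard CUPS API; the printer address and the 3D attribute name are assumptions taken from the draft, not anything shipping in CUPS today:

    // Query a (hypothetical) IPP 3D printer for draft-defined attributes.
    // "materials-col-supported" comes from the draft white paper and is an
    // assumption; "printer-state-reasons" is a standard IPP attribute.
    #include <cups/cups.h>
    #include <cstdio>

    int main()
    {
      http_t *http = httpConnect2("printer.local", 631, NULL, AF_UNSPEC,
                                  HTTP_ENCRYPTION_IF_REQUESTED, 1, 30000, NULL);
      if (!http)
        return 1;

      ipp_t *request = ippNewRequest(IPP_OP_GET_PRINTER_ATTRIBUTES);
      ippAddString(request, IPP_TAG_OPERATION, IPP_TAG_URI, "printer-uri",
                   NULL, "ipp://printer.local/ipp/print");

      static const char * const attrs[] = { "materials-col-supported",
                                            "printer-state-reasons" };
      ippAddStrings(request, IPP_TAG_OPERATION, IPP_TAG_KEYWORD,
                    "requested-attributes", 2, NULL, attrs);

      ipp_t *response = cupsDoRequest(http, request, "/ipp/print");
      if (!response)
      {
        std::fprintf(stderr, "request failed: %s\n", cupsLastErrorString());
        httpClose(http);
        return 1;
      }

      for (ipp_attribute_t *attr = ippFirstAttribute(response); attr;
           attr = ippNextAttribute(response))
        if (ippGetName(attr))
          std::printf("%s\n", ippGetName(attr));

      ippDelete(response);
      httpClose(http);
      return 0;
    }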

While the white paper is getting increasingly detailed with each revision, in a conversation with LGW, Ira McDonald (High North; PWG member, PWG secretary, and IPP WG co-chair) stressed:

This is NOT a standards development project in PWG yet (and may never be). We do have several 3D printer manufacturers and major software vendors who have contributed ideas and privately expressed support. But we’re not at the consumer promotion stage yet. We’re engaging 3D Printing vendors and other standards consortia to gauge interest at present.

Currently CUPS is only used as a testbed for the white paper. Michael Sweet (Apple, CUPS, PWG Chair and IPP WG secretary) explains:

CUPS 2.1 added a “3D printer” capability bit to allow 2D and 3D print queues to co-exist on a computer. There is no explicit, out-of-the-box support for 3D printers there, but we’ll be able to experiment and prototype things like the white paper to see what works without seeing 3D printers in the LibreOffice print dialog, for example.

So when you read about support for 3D printers in CUPS elsewhere in the news, you should make a mental note of using a lot of quote marks around the word “support”.

Exploring file formats standardization

The white paper only vaguely touches on the topic of an Object Definition Language to be used, and cautiously suggests the AMF file format (ISO/ASTM 52915) developed by ASTM Committee F42 on Additive Manufacturing Technologies, which comprises pioneers of additive manufacturing such as David K. Leigh and represents businesses and institutions such as Met-L-Flo Inc., Harvest Technologies (Stratasys), NIST etc.

AMF has certain benefits over some older file formats common in manufacturing: multiple materials support, curved surfaces, etc. Unfortunately, the specification is not freely available which has hampered its adoption.

Additionally, the participants of the BOF meetings evaluated other options such as STL, DAE (COLLADA), and, more interestingly, 3MF — a file format designed by Microsoft and promoted by the 3MF Consortium, which brings together companies like HP, Autodesk, netfabb, Shapeways, Siemens, SLM Solutions, Materialise, and Stratasys.

Earlier this year, Michael Sweet reviewed the v1.0 specification of the 3MF file format. He disagreed with some design decisions:

  • the ZIP container makes streaming production almost impossible and adds space and CPU overhead;
  • the job ticket is embedded into document data (and shouldn’t be);
  • limited material support, the only attribute is sRGB color;
  • all colors are sRGB with 8 bit per component precision, CIE- and ICC-based DeviceN color is missing;
  • no way to specify interior fill material or support material.

Even though the Consortium isn’t particularly open, Michael says he’s been in conversation with both the HP and Microsoft reps to the 3MF Consortium:

Based on the responses I’ve received thus far, I think we’ll end up in a happy place for all parties. Also, some of the issues are basically unknowns at this point: can an embedded controller efficiently access the data in the 3MF ZIP container, will the open source 3D toolchains support it, etc. Those are questions that can only be answered by prototyping and getting the corresponding developers on board.

So there’s still work to do on this front.

For developers, the 3MF Consortium provides an open source C++ library called lib3mf, available under what appears to be the BSD 2-clause license.

Who are the stakeholders in the initiative?

First of all, to give you a better idea, the Printer Working Group is a program of the IEEE-ISTO that manages industry standards groups under the IEEE umbrella.

According to Michael Sweet, several PWG members had expressed interest in a 3D track during face-to-face meetings and offline, so the steering committee agreed to schedule BOFs at subsequent face-to-face meetings, starting with the August 2014 one.

Mixed Tray in Stratasys Connex1 3D printer

This is where it gets interesting. None of the current Printer Working Group members are, strictly speaking, core 3D companies. Here’s what it looks like:

  • HP is in partnership with Stratasys and Autodesk (using their Spark platform) and planning to start selling their own Multi Jet Fusion units in 2016.
  • Canon and Fuji Xerox already resell CubePro and ProJet printers made by 3D Systems, and Kyocera got into a partnership with 3D Systems in March 2015 for the very same reason.
  • Brother was last heard of (in early 2014) reconsidering entering the 3D printing market some time in the future.
  • Epson expressed (also in early 2014) a lack of interest in producing consumer-level units and wanted to make industrial 3D printers within the next several years.
  • Xerox has been in business with 3D Systems at least since 2013, when they sold part of their solid ink engineering/development team to 3D Systems “to leverage both companies’ 3D printing capabilities to accelerate growth and cement leadership positions”. Moreover, in January 2015, Xerox filed a patent for Printing Three-Dimensional Objects on a Rotating Surface.
  • Ricoh made a loud announcement in September 2014 about jumping into 3D printing business and leading the market, but so far they are simply reselling Leapfrog 3D Printers in Europe and providing printing services in two fablabs in Japan.
  • Samsung, as some sources assert, isn’t planning to enter the market until ca. 2024; however, in September 2014 they filed a patent that covers a new proprietary multicolor 3D printing process, and in 2015 they partnered with 3D Systems for a few trade shows.
  • Intel has no related products, but they do support Project Daniel which uses 3D printing to make prosthetic arms for children of war in South Sudan.
  • Most other companies are in the consulting and software/network solutions development business.

None of the market-founding companies like Stratasys and 3D Systems (both launched in the late 1980s) are in the PWG. However, since this project is still at a very early stage of evolution, we probably should not expect this to change soon.

Even so, there is reportedly some off-list activity. When asked about the interest of 3D printer vendors in standardization, Michael Sweet replied:

My impression is that while they are interested they are also just starting to look at supporting networking in future products — still a bit early yet for most. Both Ultimaker and Microsoft have provided technical feedback/content that has been incorporated into the white paper, and I’ve been promised more feedback from half a dozen more companies, many of whom actually make printers and software tools for 3D Printers.

The 3D BOF participants have been reaching out to vendors since late 2014, but there are still more companies to talk to. LGW contacted Aleph Objects, Inc., the makers of FSF-approved LulzBot 3D printers. In a conversation, Harris Kenny stated that the team at Aleph Objects hadn’t heard of the PWG 3D initiative before, but is interested in following its progress.

LulzBot TAZ 3D printer

What gives?

While 3D printers are slowly becoming common in companies that need rapid prototyping services and are even creeping into the households of tinkerers, we are not likely to see them become as common as 2D printers any time soon.

A recent study by BCC Research suggests that the global market for 3D printing will grow from $3.8 billion in 2014 to nearly $15.2 billion in 2019. At the same time, another recent study by Smithers Pira estimates the global printing market to top $980 billion by 2018. There’s a deep black abyss between these two numbers.

The good news is that by the time anyone, for good or bad reason, can own a 3D printer, we might already have all the software bits and protocols in place to make it just work.


Feature image is Sculpture #10 by Pyromaniac.

Read More

ArgyllCMS 1.8.0 released with support for SwatchMate Cube colorimeter

Graeme Gill released a major update of ArgyllCMS with newly added support for two color measurement devices from opposite ends of the price and quality spectrum.

The first supported instrument is the SwatchMate Cube, a fancy little colorimeter you can carry around to pick a color swatch wherever you want, then review the acquired palette on your mobile device (iOS, Android), paste it into your Photoshop project etc.

SwatchMate Cube

The Cube was successfully crowdfunded a year and a half ago on Kickstarter and caused quite a bit of media excitement, as if it were the first portable device ever to pick colors from physical objects (it wasn’t).

Graeme got a Cube mainly for two reasons: because it was made in Melbourne, where he lives, but also to see how this entry-level device (ca. US$180) stacks up against more expensive and more commonly used instruments like the X-Rite ColorMunki. He ended up writing a two-part article where he explained why and by how much the Cube’s readouts are hit-and-miss (especially for glossy surfaces), and how the device could be further improved.

The other newly supported device is the EX1 by a German company called Image Engineering. The EX1 is a spectrometer for measuring light sources. At €2,800 it’s not exactly something you’d throw spare cash at, but rather something you get to ensure the highest color fidelity in a professional environment.

Image Engineering EX1

Other changes include:

  • support for Television Lighting Consistency Index (EBU TLCI-2012 Qa) in spotread and specplot apps’ output;
  • support for adding R9 value to CRI value in spotread and specplot apps’ output;
  • various bugfixes, library dependencies updates etc.

For a complete list of changes have a look at the website. In addition to source code, builds are available for Linux, Windows, and OS X.

Graeme also updated his commercial ArgyllPRO ColorMeter app for Android. The new version features pretty much all the improvements from the new ArgyllCMS release. It also receives readouts from the Cube via Bluetooth Low Energy (USB is available too) and supports using the ChromeCast HDMI receiver as a video test patch generator. As usual, a demo version of the app is available.

Read More

GEGL gets mipmaps, 71 new image processing operations

GIMP’s new image processing engine got its first update in three years, and it’s so full of awesome you’d cry and demand GIMP 2.10 be released right next to it.

Supernova GEGL operation in development build of GIMP

A lot of work has gone into making GEGL faster. There’s still a lot of work to be done, but the new version features major improvements such as:

Better thread-safety and experimental multithreading support. You can run e.g. ‘$ GEGL_THREADS=4 gimp-2.9’ from a terminal window. But don’t expect this to automagically improve performance: it still needs a lot of testing, and developers are interested in thoughtful reports.

Experimental mipmaps support. If you are not familiar with mipmaps, here’s the basic idea. Instead of working on a huge image in its entirety, an application generates a smaller version of the original image and processes it for preview. While you are evaluating the preview, it silently chews on the real thing in the background. Again, it’s an experimental feature currently not used by GIMP; whether it will prove to be GIMP 2.10 material depends on contributor activity.

New default tile backend writes to disk in a separate thread. This should make GIMP more responsive while saving/exporting files.

GEGL 0.3 also got 71 new image processing operations. Mostly they are ports of existing GIMP filters, which automatically makes them eligible for future non-destructive editing workflows. A lot of that work was done by Thomas Manni, who is among the most silent and hard-working GEGL contributors of late.

However, porting GIMP filters to GEGL doesn’t necessarily end at writing a GEGL operation and compatibility code for GIMP to keep the operation accessible to plugins and scripts. Some GEGL filters like the Fractal Explorer have a lot of options, hence an automatically generated user interface may simply not fit even a 4K display vertically.

Automatically generated UI for Fractal Explorer port on a 1920×1280 display

To fix that, one needs to write a custom user interface in GIMP. This started creeping into GIMP’s code base about a year ago. The Diffraction Patterns operation is among the notable examples of making a familiar interface with all the benefits of using the GEGL tool’s skeleton, such as presets and live preview on canvas.

Diffraction Patterns has a compact custom user interface much like the original GIMP plugin

On a related note, one of the slightly nerdy new features of this GEGL release is ‘ui_meta’. Basically, GEGL operations can now provide useful hints to GEGL-based applications about the best ways to render user interfaces for various properties.

Here are just a few examples. If you want GIMP to display a rotary widget for the quick setting of an angle, you can add 'ui_meta ("unit", "degree")' to the property in question.

Rotary widget for quickly choosing an angle in development build of GIMP

The (“unit”, “relative-coordinate”) meta will create a button next to the input field; clicking it lets you pick a relative position from your image, for example, the center for a Zoom Motion Blur effect.

Additionally, if there are two adjacent properties, where the first one has the (“axis”, “x”) meta and the other one has (“axis”, “y”), GIMP will create a chain button for these two values, so that you can e.g. lock the ratio between the two values or keep them equal.

X and Y values can be locked to each other, and you can pick a relative position

More work needs to be done on the ranges of property values exposed in the user interface.

But wait, there’s more. Jon Nordby backported all the changes he made to GEGL while working on The Grid, an artificial-intelligence-based CMS that relies on GEGL for all image processing work. One of them is reading custom GEGL operations written as JSON files.

img_flo web app for creating node compositions with GEGL operations

The idea is to reuse the concept of meta-operations, available in GEGL for a very long time. E.g. such a core filter as unsharp mask is actually a meta-operation that combines the use of several other operations: add, multiply, subtract, and Gaussian blur. You can create your own meta-operations of any complexity with the img_flo web app, then use them from within GIMP.
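
As a sketch of what such a composition amounts to under the hood, here is the unsharp mask graph hand-built from C++ against GEGL’s C API. The operations and pad names are real GEGL ops; the file names and the 0.5 scale factor are placeholders:

    // Hand-built "unsharp mask": blur a copy, subtract it from the original
    // to get the high-pass detail, scale that, and add it back.
    #include <gegl.h>

    int main(int argc, char **argv)
    {
      gegl_init(&argc, &argv);
      {
        GeglNode *graph = gegl_node_new();
        GeglNode *input = gegl_node_new_child(graph,
            "operation", "gegl:load", "path", "in.png", NULL);
        GeglNode *blur = gegl_node_new_child(graph,
            "operation", "gegl:gaussian-blur",
            "std-dev-x", 2.0, "std-dev-y", 2.0, NULL);
        GeglNode *subtract = gegl_node_new_child(graph,
            "operation", "gegl:subtract", NULL);
        GeglNode *scale = gegl_node_new_child(graph,
            "operation", "gegl:multiply", "value", 0.5, NULL);
        GeglNode *add = gegl_node_new_child(graph,
            "operation", "gegl:add", NULL);
        GeglNode *save = gegl_node_new_child(graph,
            "operation", "gegl:save", "path", "out.png", NULL);

        gegl_node_link(input, blur);
        // subtract computes "input" minus "aux": original minus blurred copy
        gegl_node_connect_to(input, "output", subtract, "input");
        gegl_node_connect_to(blur, "output", subtract, "aux");
        gegl_node_link(subtract, scale);
        // add the scaled high-pass detail back onto the original
        gegl_node_connect_to(input, "output", add, "input");
        gegl_node_connect_to(scale, "output", add, "aux");
        gegl_node_link(add, save);

        gegl_node_process(save);
        g_object_unref(graph);
      }
      gegl_exit();
      return 0;
    }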

Finally, just to avoid confusion: the newly released GEGL 0.3 is not something you can “install” into an existing stable version of GIMP to automatically get all the new features. It’s best to treat it as a foundation of what’s coming in GIMP 2.10 and beyond.

92 people contributed to making GEGL 0.3 happen, but there are still plenty of contribution opportunities for everyone: porting more filters, improving default value ranges, descriptions etc., making further performance improvements, and adding new exciting features.

Read More

Krita raises over €33,000 at Kickstarter

Earlier this week, Krita Foundation successfully raised the money to fund 6 months of work on the increasingly popular free digital painting application for Linux and Windows.

The campaign launched on May 4. Two weeks into the fundraiser, 643 backers brought €20K (the baseline for the project to succeed), then 322 more pledged another €10,520. Additionally, the team received €3,108 in donations via PayPal and will use that money to work on features from the list of 24 stretch goals.

Much like last year, the team started working on some of the stretch goals already during the fundraiser: modifier keys for selections, stacked brushes, the basics of memory management (reporting if you are about to overuse RAM). The upcoming stable v2.9.5 release will feature these and many other newly added features and fixes.

File size warning in Krita

In the coming days developers will be processing submitted surveys from users who pledged €15+ and thus got the right to vote for stretch goals, then continue working on both core tasks — performance boost and animation — and the stretch goals.

Both Google Summer of Code projects — animation and the tangent normal map brush engine — are being actively worked on. You can read Jouni Pentikäinen’s blog to follow his progress on animation.

Meanwhile, the team published several interviews with artists who depend on Krita in their work, including David Revoy. A longer and very insightful interview with David was also done by Erik Moeller; it focuses on topics such as art, the merits of different licenses, crowdfunding models etc.

Read More