Tag Archives: Software

Scribus 1.5.4 released with CxF3 and initial QuarkXPress support

The Scribus team announced the release of a new version of this free desktop publishing software, featuring some interesting improvements.

While v1.5.4 is officially called “stable development release” (something that former GIMP 2.9.x users can relate to), the idea is to eventually release a “stable stable” v1.6.0 (no timeframe is given).

Much like the simultaneously released v1.4.7 (stable branch), Scribus 1.5.4 features a lot of fixes, particularly for “fringe uses” of PDF generation, as well as security fixes. But a few other changes stand out.

The main change in this version is newly added support for color swatches in the CxF3 file format, designed by X-Rite and standardized by ISO. The initiative was fueled by the FreieFarbe project, which has a confusingly large scope but mostly aims to make a positive change in an industry dominated by Pantone and other proprietary color systems.

Editing a color in LAB in Scribus 1.5.4

Technically, CxF3 is superior to most existing formats for storing color palettes/swatches as it supports storing color values in various models, including device-independent CIE LAB, LCH, and XYZ.

It also supports storing colors as spectral values (either transmittance or emissive) sampled in nanometers. And on top of that, you can write actual color recipes into these files, with colorant names and their respective percentages.
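
To illustrate why device-independent values are a big deal: a LAB color can be converted to any RGB output space on demand. Here is a minimal Python sketch of such a conversion (the D65 white point and the sRGB matrix are my assumptions for brevity; a real CxF3 workflow would honor the white point recorded in the file):

    # CIE LAB -> sRGB, a minimal sketch (not Scribus code)
    def lab_to_srgb(L, a, b):
        # LAB -> XYZ, relative to the D65 reference white
        def f_inv(t):
            return t ** 3 if t > 6 / 29 else 3 * (6 / 29) ** 2 * (t - 4 / 29)
        fy = (L + 16) / 116
        X = 0.95047 * f_inv(fy + a / 500)
        Y = 1.00000 * f_inv(fy)
        Z = 1.08883 * f_inv(fy - b / 200)
        # XYZ -> linear sRGB (IEC 61966-2-1 matrix)
        r = 3.2406 * X - 1.5372 * Y - 0.4986 * Z
        g = -0.9689 * X + 1.8758 * Y + 0.0415 * Z
        b2 = 0.0557 * X - 0.2040 * Y + 1.0570 * Z
        # linear -> gamma-encoded sRGB, clipped to the sRGB gamut
        def encode(c):
            c = min(max(c, 0.0), 1.0)
            return 12.92 * c if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055
        return tuple(round(encode(c), 2) for c in (r, g, b2))

    print(lab_to_srgb(50.0, 40.0, -30.0))  # ≈ (0.63, 0.37, 0.67), a muted violet

Spectral values go one step further still: they describe the color itself rather than any particular rendering of it, so the same swatch can be converted for any output device.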

Scribus is currently the first and only desktop publishing software to support CxF3 files.

There are more consequences to this. First of all, Scribus originally required at least 16-bit per channel precision for the HLC Colour Atlas by FreieFarbe, but Jean Ghali went ahead and extended this to 64-bit per channel floating-point precision. Secondly, the Adobe IDML and VIVA Designer import filters have been updated to support the LAB color model.

Another user-visible feature is a contribution from Terry Burton, who updated the Barcode plug-in to support DotCode and Ultracode in the Two-dimensional symbols group, and GS1 North American Coupon in the GS1 DataBar Family.

DotCode in Scribus 1.5.4

Finally, Scribus continues to reuse libraries created by the Document Liberation Project and introduces initial support for importing Zoner Draw (v4 and v5) and QuarkXPress (v3 and v4) documents (see our earlier report on libqxp for more info).

Scribus 1.5.4 is available for download for a variety of Linux-based systems, as well as for Windows and macOS.

Read More

GIMP 2.10 released, what’s up with the new release policy?

Six years and several unstable releases after 2.8.0, GIMP 2.10 is out in the wild for the general public.

Release highlights include features that users have been asking for all along:

  • Processing with 16/32-bit per color channel precision, integer or float;
  • Optional linear RGB workflow (see the sketch after this list);
  • 15 new blending modes, including Pass-Through;
  • Color management rewritten and now a core feature, all color widgets are now color managed, ICC v3 profiles supported;
  • CIE LCH and CIE LAB color models now used in a few tools;
  • Performance improvements for some filters thanks to multi-threading;
  • New Unified Transform (rotating, scaling, perspective etc. in one go), Handle Transform, and Warp Transform (think Photoshop’s Liquify) tools;
  • Gradient tool now allows on-canvas editing;
  • Digital painting improved with threaded painting, canvas rotation and flipping, new MyPaint Brush tool, symmetry painting;
  • Exif, XMP, IPTC, DICOM metadata viewing and editing;
  • Newly added support for WebP, OpenEXR, RGBE, HGT;
  • Improved support for TIFF, PNG, PSD, PDF, FITS;
  • Pre-processing of raw digital photos with darktable or RawTherapee, depending on your preferences (more processors can be plugged in);
  • Over 80 GEGL-based filters with on-canvas preview, including custom split before/after preview.
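
To illustrate the “linear RGB workflow” item above: many operations, averaging and blending among them, give different results depending on whether they run on gamma-encoded or linearized pixel values. A minimal Python sketch, using the transfer function from the sRGB specification (IEC 61966-2-1):

    # sRGB transfer function; values in [0, 1]
    def srgb_to_linear(c):
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

    def linear_to_srgb(c):
        return 12.92 * c if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055

    a, b = 0.2, 0.8
    naive = (a + b) / 2  # averaging the encoded values directly
    linear = linear_to_srgb((srgb_to_linear(a) + srgb_to_linear(b)) / 2)
    print(f"{naive:.3f} vs {linear:.3f}")  # 0.500 vs 0.600

GIMP 2.10 lets you choose which of the two behaviors you want, which is the gist of the optional linear workflow.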

Having written most of the release notes for 2.10, I don’t particularly intend to do a full review here (yes, this is my disclaimer that I’m affiliated and thus biased). Instead, here is something I’d like to share regarding the ongoing development of GIMP and future plans.

Is GIMP dead, stalled, or picking up pace?

Over the past few years, I’ve heard these and other ideas quite a few times. Pat David recently covered that (among other things) in his famous talk at SCaLE:

Essentially, over the past couple of years the workload has shifted from Michael Natterer (who, at some point, did ca. 80% of the work) towards other contributors. These days, Michael, Jehan (Pagès), and Ell do about the same amount of commits — 25%, 25%, and 24% respectively, per OpenHub’s data for a sliding 12-month window — although one might successfully argue that this is a stupid metric.

In terms of focus, Michael does most of the under-the-hood work and some user-visible stuff, Jehan mostly fixes bugs and adds painting-related features, and Ell does all of that, plus impressive contributions to GEGL.

On the backend side of things, Øyvind Kolås is very active with GEGL and babl, and he gets a lot of help from Debarshi Ray (GNOME Photos) and Thomas Manni. Thomas, in particular, was instrumental in getting as many GIMP filters as possible ported to GEGL. If you’ve been waiting for Shadows-Highlights to become available in GIMP, Thomas is the one you should be thanking.

All in all, the development pace has been about the same throughout the entire 2.10 development cycle. What made people think GIMP was first stalling and is now picking up the pace is simple: more frequent releases. Let’s have a look:

  • May 2012: 2.8.0 released. GEGL port for tiles management in future 2.10 announced.
  • 3.5 years later: GIMP 2.9.2, the first unstable version, is released.
  • 8 months later: GIMP 2.9.4.
  • 13 months later: GIMP 2.9.6.
  • 4 months later: GIMP 2.9.8.
  • 3 months later: GIMP 2.10.0 Release Candidate 1.
  • 1 month later: GIMP 2.10.0 Release Candidate 2.

It appears that developing major new features in branches wasn’t sufficient. Hence a new solution.

Relaxing development/release policy

During Libre Graphics Meeting 2017, the team decided to relax the policy and start introducing new features in stable versions, where technically feasible, starting with 2.10. That way, most work would be done on the main development branch (master), and new features would be backported to the 2.10 branch when possible.

One particular reason for that is the upcoming work on completing the GTK+3 port. Simply put, nobody knows how much time this is going to take, just like nobody knew how much time the GEGL port would take. If I wanted to scare you, I could say that we might be looking at another five-or-six-year development cycle. But no one really knows.

That’s why it’s important to keep giving users new exciting stuff, while “boring” work on internals is ongoing.

What kind of new stuff? Again, it depends. E.g. there is certain interest, and even some preliminary work, in adding mipmap support to improve performance for resource-hungry operations. There are more painting-related optimizations to be done. Or there could be something completely unexpected. Completing the N-Point Transform tool maybe? I’d love that!

Which leads us to the next point.

Development priorities

People have been rightfully wondering how much time it’s going to take the team to implement the most desired feature: non-destructive editing. Even if we are looking at a couple of years leading up to 3.0, there would be at least another year or two to complete 3.2 (but, again, nobody really knows).

And then CMYK/spot color support is in the Future section of the roadmap. So is the autoexpansion of layer boundaries… And the list goes on. So there’s this idea that priorities should be swapped.

The problem with this is that GTK+2, currently used for the UI of GIMP 2.10, is barely maintained. While the GIMP team doesn’t like everything they see in GTK+3 (oh, some of the IRC conversations!), the newer version is vastly superior in almost every respect.

But here is some good news: priorities do in fact change based on the activity of developers, provided the foundation for the new features is ready.

Case in point: the Unified Transform tool was originally in the Future section too. Then Mikael Magnusson arrived and just did the work. So now you can enjoy doing all of your rotating, scaling etc. in one go.

And that’s where relaxing the release policy comes in handy again. It’s difficult to engage new developers when their work will only see the light of day in a stable release at some unknown point in the future. It’s a lot easier to do that when you have regular stable releases with new features.

Is this going to work? We’ll just have to wait and see.

In the meantime, you can support the work of Jehan Pagès and Øyvind Kolås on both GIMP and GEGL on either Patreon or Liberapay. This page on GIMP.org lists all available options.

And the last revelation: I have both Patreon and Liberapay campaigns too, but frankly, I spend so much time on GIMP that I’m not even sure that my work on LGW is something you would be rewarding me for. Tell me!

Read More

Introducing libresvg, a rival to librsvg and QtSvg

Evgeniy Reizner announced the first public release of libresvg. This new SVG rendering library aims to replace librsvg and QtSvg, as well as become an alternative to using Inkscape as an SVG-to-PNG converter.

In the community, Evgeniy is mostly known for SVG Cleaner, a very useful tool that makes SVG files a lot smaller by removing cruft such as unused and invisible elements. He started libresvg about a year ago and has been working on the first release ever since. Today, libresvg v0.1 supports a subset of SVG Full 1.1, minus a number of elements (more on that later).

The reason why libresvg exists is that Evgeniy is quite unhappy with existing options such as librsvg (hereafter rsvg) and QtSvg (SVG Cleaner has a Qt GUI). He claims that the former has serious architectural issues and a plethora of parser bugs, and is difficult to ship on platforms other than Linux (being hardwired to Cairo and glib).

Comparison between various SVG renderers

At the same time, QtSvg has rather incomplete support for SVG elements.

Design specifics

While libresvg is written in Rust, and librsvg is being ported to Rust as well, there are some technical differences between the two that Evgeniy outlined both in his original post at linux.org.ru and in a private email exchange. They mostly boil down to how he tries to avoid things he sees as architectural imperfections of rsvg.

First of all, libresvg is designed differently. It parses an SVG document into a DOM, does some preprocessing such as cruft removal and markup normalization, then constructs a simplified DOM that contains commands for the rendering backend. Parsing and the other steps are done with his own toolchain (xmlparser, svgparser, svgdom), compiled into a single binary that works as a command-line converter.

With libresvg, preprocessing only happens once (Evgeniy claims that this doesn’t seem to be the case for rsvg when rendering to canvas), after which 99% of the rendering time is spent on the Cairo/Qt side. This also means a smaller CPU footprint for the library.

So yes, there’s that too: libresvg supports multiple drawing backends. Qt and Cairo are already done, Skia is on the roadmap.
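
To make the design more tangible, here is a toy Python illustration of that two-stage approach (libresvg itself is written in Rust, and none of the names below are its actual API):

    # Toy sketch: parse and preprocess an SVG once, then hand a flat list
    # of backend-neutral commands to whichever 2D library does the drawing.
    import xml.etree.ElementTree as ET

    SVG = """<svg xmlns="http://www.w3.org/2000/svg">
      <defs><linearGradient id="unused"/></defs>
      <rect x="0" y="0" width="10" height="10" fill="#ff0000"/>
    </svg>"""

    NS = "{http://www.w3.org/2000/svg}"

    def build_render_tree(svg_text):
        root = ET.fromstring(svg_text)
        # preprocessing pass: here we naively drop <defs> wholesale;
        # the real library tracks references and removes only unused cruft
        for defs in root.findall(NS + "defs"):
            root.remove(defs)
        # the "simplified DOM": plain draw commands with styles resolved
        commands = []
        for el in root.iter(NS + "rect"):
            commands.append(("rect",
                             float(el.get("x")), float(el.get("y")),
                             float(el.get("width")), float(el.get("height")),
                             el.get("fill", "#000000")))
        return commands

    class PrintBackend:
        # stand-in for a Cairo/Qt/Skia backend that just replays commands
        def execute(self, cmd):
            print(cmd)

    tree = build_render_tree(SVG)  # the expensive part, done exactly once
    backend = PrintBackend()
    for cmd in tree:               # rendering can then be repeated cheaply
        backend.execute(cmd)

The payoff of this split is exactly what Evgeniy describes: the costly DOM work is not repeated per render, and backends stay small and interchangeable.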

How much of SVG is supported

As of v0.1, libresvg surpasses QtSvg in terms of SVG compliance, but needs to gain support for more SVG elements to be on par with rsvg. Support for animations, scripting, and SVG fonts is not planned.

SVG compliance test chart

When compared to rsvg, this is what libresvg v0.1 looks like:

  • Libresvg doesn’t yet support filters, clipping paths, masks, markers, and patterns (which rsvg does support to an extent).
  • Libresvg has complete support for gradient fills, while rsvg cannot inherit gradient attributes and validate them, nor can it read single-stop gradients (swatches, typical for SVG documents produced with Inkscape).
  • Libresvg has better support for text rendering: librsvg doesn’t read xml:space and text-decoration; it also doesn’t always render multiline text correctly and doesn’t support tspan very well.
  • Libresvg has better, though still incomplete, support for CSS 2.

Evgeniy is currently hesitant to start working on SVG 2 support, as the spec isn’t finalized yet, nor has there been a decision on which new features will make it into the W3C recommendation.

Further work

One last important thing: support for sprites is currently planned for v0.2. So if you expected to start using libresvg instead of Inkscape to convert master SVG documents (e.g. all icons in a single SVG file) to multiple PNG files, you’ll have to wait a bit. The developer first has to implement transferring element IDs from the original document to the simplified DOM.

Evgeniy doesn’t yet use his new library in SVG Cleaner, but that’s temporary. He says he might return to this after releasing libresvg v0.3.

The source code of libresvg and the involved toolchain is available on GitHub. At some point in the future, the project will probably be renamed, for fairly obvious reasons. Evgeniy accepts ideas on that.

Read More

Valentina Fork Settles Down As Seamly2D, Valentina Goes On

Four months into a bizarre fork of Valentina, free pattern-making software for fashion designers, Susan Spencer’s leg of the fork finally gets rebranded as Seamly2D.

There are now two projects that share the proverbial 99% of the code base: 1) the original Valentina project, forked by its founder Roman Telezhinsky, and 2) Seamly2D, managed by Valentina’s other founder, Susan Spencer. But let’s roll it back a bit.

The Story

The project was started by Roman Telezhinsky (Ukraine) and Susan Spencer (USA) in 2013. Both founders had previously attempted to write software for pattern-as-in-clothes design. Within the Valentina project, however, Roman took on the role of writing the code, while Susan quickly gravitated towards community building, PR, handling financials (paying Roman’s salary, in fact) etc.

Early on, Roman took a position that basically boils down to this (opennet.ru, 2013):

I work on this project for myself. If anybody else needs it—great. If nobody else needs it, it’s fine as well.

Depending on where you are coming from, this either contradicts or complements his more official statement (Valentina blog, 2013):

It’s clear that a single person cannot realistically create such a program. So I made it an open project, hoping that I won’t be the only one interested in it. I hope it doesn’t stop at that.

Despite this rather blunt, classic approach to publishing software under the terms of the GPL, users soon started gathering around the Valentina project. The two main reasons for that were the technical excellence of the software (despite a lot of rough edges) and solid community work.

The former can be explained by the introduction of parametric design into end-user software, which greatly simplified making adjustments, as well as refitting an existing design to a completely different person.

Moreover, with over 50 pattern-making systems supported, the project became somewhat popular with designers of contemporary clothes as well as with the historical recreation community, since a significant part of the supported systems covers Victorian tailoring, as well as garment cutting from even earlier centuries.

There’s something else that should be factored in to explain the public’s interest in Valentina/Seamly2D. Pattern-making software is mostly proprietary and very expensive, even for personal use. Top-notch systems like Gerber AccuMark and Lectra Fashion PLM are targeted at large companies and are in the general arm/leg/kidney ballpark price-wise. If you know exactly how much either of them costs, congratulations—you are the owner of a large fashion business with hundreds of employees.

Less expensive options typically start around $1,000. Some cheaper (and simplistic) solutions exist, and even then vendors would try to charge you for every single extra feature.

And, to the best of our knowledge, none of the above have native Linux versions. Needless to say, none of them is free-as-in-speech.

A user who commented on the sodaCAD blog back in 2014 pretty much nailed it:

I’ve been in the pattern making industry for over 20 years and we REALLY NEED a free/cheap/open solution. It’s almost impossible to hire skilled operators in New Zealand simply because nobody can afford to buy the software and get skilled up in it.

That’s why breaking the Valentina team in two was dangerous, if inevitable. But this is not the usual “a couple of programmers had a technical argument” story. Digging into the conflict between the founders has been an exceptional, if frightful, source of insights into the world of Things That Can Go Wrong On So Many Levels.

  • Language barrier? Check.
  • Mutual misunderstandings and apparent lack of persistence to clear things up? Check.
  • Huge project vision clashes? Check.
  • Being borderline rude to potential contributors? Check.
  • One founder allegedly locking the other out of direct communication with potential partners? Check.
  • Social awkwardness of one founder enabled by the tendency of the other founder to sweep the dust under the rug? Check.

Arguably, so far the most sensible comment on the whole situation comes from Mario Behling who, at some point in the past, unsuccessfully tried bringing Roman to live in Berlin and work on the project in a hackerspace:

In my opinion they should just calmly do their own things and let it be. I think their worlds are just too far apart.

It’s hard to tell how calm they can get. In his most recent post, Roman summarizes his vision of working with a community and uses what one might call “brutal honesty”. The statements go well into uneasy territory, breaking almost every rule of contemporary community management. If anything, they hint at exactly how difficult working with him could be for other contributors—something he readily admits in both private conversations and earlier public posts.

And then what?

We could leave it at that, were it not for the fact that four months into the fork, the amount of confusion about the two separated projects is still staggering. Not least because it’s caused by actual stakeholders.

Case in point. A few weeks ago, Susan Spencer launched the Fashion Freedom Initiative (FFI) which is:

…an open community of indie designers, forward thinking businesses, artisan producers, makers, crafters, hackers and doers. We are working together to build and run our own, independent chains for global fashion production.

The initiative seems like an interesting approach to solving quite a few things that are wrong with the fashion industry. The founders appear to rely on Seamly2D as their strongest community-building tool. So it’s only expected that the project started posting user stories.

The first such story, a Seamly2D testimonial by Megan Rhinehart, founder of Zuit, is a great inspirational read, save for several statements she made.

One of the things I love about Seamly2D is that it is getting translated into so many languages.

It’s not. The Transifex account that Susan Spencer keeps pointing users to is owned by Roman Telezhinsky. The translators there are not translating Seamly2D. They are translating Valentina and probably don’t even know that.

Moreover, she couldn’t have been using Seamly2D, unless it was a private build from Git master made within the last couple of weeks. There are simply no builds of Seamly2D to download yet, nor have there been any Seamly2D releases. The 0.6.0.1 release was made a full month prior to the final rebranding. Susan Spencer got the valentina-project.org domain name and the website as part of the separation deal. The downloads section of the website still distributes Valentina builds. It even says “Valentina” right on the front page, next to “Seamly2D”.

[Seamly2D] is cloud-based so I can see what the tailor sees. I could potentially add users to help with pattern design and quality control.

Seamly2D is not cloud-based, nor is Valentina. It’s a Qt/C++ desktop application that has to be downloaded and installed. When asked for clarification, Ms. Rhinehart replied that there was “a third party app to run it on the cloud” involved. As of December 7, the testimonial retains the original, unedited statement.

It also doesn’t help that Seamly2D has two simultaneously maintained GitHub repositories (more on that later). Some of that confusion can be explained by the fact that the separation agreement was made hastily, angry conversations went on for a while, and there were no clean cuts.

Present State of Affairs

In terms of writing actual code, this is what things look like at the moment.

Roman more or less maintains the programming pace, fixing bugs, making various enhancements, writing new features, and publishing test builds. August through October was a busy time for him, less so for November, and he expects December to be a slow month for the project as well.

Code-wise, Seamly2D isn’t as efficient so far. The project confusingly operates two GitHub repositories at the moment:

  1. The one with the rebranded repo name and all (or most) issues converted from the Bitbucket tracker, yet without the latest changes: https://github.com/fashionfreedom/seamly2d.
  2. The one with the old name and yet with all the recent changes, including rebranding in the source code and visuals: https://github.com/valentina-project/vpo2

In fact, since August, changes in what is now the Seamly2D code base boil down to rebranding, updating/fixing the build system and setting up automatic builds on a new account, updating various build- and contribution-related docs, renaming icons, and improvements in generating tiled PDF files. That is, the vast majority of changes don’t fix bugs or introduce new features.

During a conversation on September 11, Susan Spencer stated:

Since Roman left, we’ve received offers to contribute from four programmers. They are waiting on the issues list to be recreated.

This is an important part, because the alleged pushing away of contributors by Roman was one of the biggest concerns mentioned by Susan.

However, three weeks after this step was completed, source code changes still weren’t pouring into the repository. We asked Mrs. Spencer for insight on that, and then a weird thing happened:

  1. She changed the narrative into what boils down to “we do have programmers, but they are currently unavailable”.
  2. She then provided a rather believable explanation for each “missing programmer” case, without naming anyone or giving away too many details, in order to protect the privacy of the alleged future contributors.
  3. Following that, she mentioned another technical detail about all of them that, if published, would raise questions about the possibility of any actual programming being done in the project.
  4. Finally, she specifically forbade publicly mentioning the specific information she provided, out of “fear that … there could be a big questionmark on our community” within this article.

Instead, Mrs. Spencer provided this statement:

I would like for the take-away from all this to be that our all-volunteer community is handling the situation rather well. They are an open, honest, and upstanding group of nice people who care about each other and about the project. I’m quite proud of them.

All in all, Susan Spencer seems genuinely protective of the community she helped grow, although in this particular case it leads to questionable PR tactics.

Aftermath

It would be extremely easy to blame either side for what’s currently going on with both projects. However, even from what’s left unmoderated in the forum, it’s clear that there has been a lot of mutual hostility, but above all—a lack of understanding from both founders and community members. Some of it continues to pour out one way or another.

Maintainers of both Seamly2D and Valentina emphasize that their projects are doing well. However, the former has mostly lacked visibly active developers since day one, and the latter doesn’t get nearly as much community awareness as before.

In the coming months and years, we are likely to see for ourselves whether a community/PR manager can build a team of developers, and whether a developer can succeed in building a strong, dedicated community.

If you ask which project you should be tracking from now on, the best answer we can give you is “both, if you can stand occasional outbreaks of passive aggression and nasty remarks”. Nobody actually promised that free software would be a peaceful ecosystem. But it will get better.

Read More

Document Liberation Project announces initial QuarkXPress support

The Document Liberation Project (DLP) announced the first release of libqxp, a library for reading QuarkXPress 3.3–4.1 documents. And this is one hell of a trip down memory lane.

The initiative is a perfect fit for the project’s agenda to implement support for as many legacy file formats as possible (see our earlier interview with Fridrich Strba et al.), although the timing is a bit of a puzzle.

History lessons

QuarkXPress was once the king of desktop publishing, with a reported 95% market share at its highest point. But corporate greed, overconfidence, and lack of vision pretty much killed it in the early 2000s, and Adobe InDesign nailed its coffin shut.

A typical comment to the Ars article (linked above) on the subject looks like this:

We hated Quark, the program and the company. But of course we used it because it was ubiquitous. InDesign 1.0 wasn’t great, but we were so desperate to move away from Quark that we slowly converted.

From many discussions on the web regarding Quark and Adobe, it looks like QXP users mostly got their closure in 2003–2004, when Adobe’s Creative Suite arrived and settled in, although some stuck with Quark’s software through v5 and v6.

Ever since Adobe introduced its subscription-based model in 2013, there’s been a somewhat popular notion that Adobe is the new Quark and is on the road to failure. However, after an initial setback in 2013 and 2014, the company’s financials have been steadily growing, in terms of both revenue and net income. And since the introduction of Creative Cloud in May 2013, Adobe’s stock price is up by ca. 230%. So it looks like they need to try harder to fail.

Although Quark has been trying to win back its former market share by any means deemed necessary, they haven’t been very successful. The company eventually refocused on automating content creation, management, publishing, and delivery. There are very few businesses around that still run the once popular QuarkXPress, let alone the versions from 15–20 years ago that DLP focused on. Which brings us back to the actual topic at hand.

What’s in libqxp 0.0.0

The newly released first version of the library is the result of several months of work by Aleksas Pantechovskis, a student from Lithuania, who participated in the Google Summer of Code program this year (again).

Aleksas already had a good track record with the Document Liberation Project. Last year, he wrote libzmf, a library for importing Zoner Callisto/Draw v4 and v5 documents.

In this initial release the libqxp library reads:

  • pages and facing pages;
  • boxes (rectangles, ellipses, Bezier);
  • lines, Bezier curves;
  • text objects, including linked text boxes and text on path;
  • font, font faces, size, alignment, paragraph rules, leading, tabs, underline, outline, shadow, subscript, superscript, caps etc.;
  • colors (including shades), gradients (linear, radial, rectangular);
  • line/frame color, width, line caps and corners, arrows, dashes;
  • object groups;
  • rotation.

Some rather important features, like custom kerning and tracking, aren’t yet supported, because the OpenDocument file format doesn’t support them. But that’s not much of an issue, according to Aleksas:

librevenge is just interfaces, so if there is another output generation lib instead of libodfgen, for a format that supports them, then it can use any attributes passed to it.

One big missing part in this release is support for image objects, because, Aleksas says, the picture format seems to be quite complicated.

Development of libqxp sits on top of reverse-engineering work started by Valek Filippov in OLE Toy in 2013 and continued by David Tardon and Aleksas in February 2017. Although libqxp sticks to ancient versions of QuarkXPress for now, OLE Toy can parse some of the data in QXP v6 and v8 (it’s encrypted since v5), so this might change in the future.

LibreOffice has already been patched to open QXP files; this feature will be available in v6.0 (expected in early 2018). The library itself ships with the usual SVG converter, which you are likely to find of limited use. Also, if all you need is extracting text, there’s a perfectly sensible qxp2text converter as well.

Support in Scribus

One would rightfully expect Scribus to be the primary beneficiary of libqxp. But here is some background info.

First of all, the history between Quark and Scribus is rather hairy.

Initially, Scribus was pretty much modeled after QuarkXPress, and the two projects still share some similarities. Early in the history of Scribus, it made a lot of sense to introduce support for QXP files: users driven mad by Quark’s continuous quirks and bad user support would jump ship at the very next opportunity.

Paul Johnson, a former Scribus contributor, actually started working on support for QXP files in 2004. But after he posted to a public mailing list about his progress, he reportedly received a cease-and-desist letter from Quark.

Scribus was nowhere near its current fame at the time, and even now it would not be able to handle the legal expenses (save for a theoretical FSF intervention). Back then, Paul just stopped working on the importer.

Quark didn’t quit monitoring Scribus, though, and continued tracking the progress of the project, to the point where developers jokingly discussed blocking Quark’s IP address range from accessing Scribus’s source code repository (they reportedly had logs of the visits). Eventually, Quark turned their attention towards more pressing matters, like losing their market share to Adobe.

Today, much like LibreOffice, Scribus supports both ubiquitous file formats like IDML and bizarre ones like those of Calamus and Viva Designer. It even supports Quark’s XTG files. Getting a QXP importer would perfectly fit Scribus’s narrative, too.

The team is well aware of the libqxp project, and they already have experience writing librevenge-based importers for Corel DRAW, Microsoft Publisher, Macromedia FreeHand etc. So it’s likely just a matter of time till they introduce a QuarkXPress importer.

Is there any closure left to get?

Read More

Inkscape hackfest planned for late June in Paris

Following productive hackfests in 2015 and 2016, the Inkscape team is meeting in Paris later this month for another hackfest. The event is taking place June 27th through July 1st at Paris’s modern science museum, Cité des sciences et de l’industrie.

(Not quite) coincidentally, the venue is exactly where in 2008 part of the original documentation team met for the first time to work on the official user manual.

So far, the hackfest agenda seems to cover many topics from the official roadmap for the next major update of Inkscape: the GTK+3 port, the coordinate system flip, making a C++11 compiler a requirement, splitting less-maintained extensions into an extra package, and improving performance. All of which is another reminder that, should the team stick to the plan, they will need all the help they can get to prepare the next release in a sensible amount of time.

Attending team members include core developers like Tavmjong Bah, Martin Owens, and Jabier Arraiza, as well as contributors like C Rogers, Cédric Gemy, and Elisa de Castro Guerra. Apart from the programming sessions, there’s a community meet-up planned for Saturday, July 1st.

The team is currently revamping the project’s infrastructure. Most recently, they moved to GitLab for source code hosting and bug tracking, marking a departure from Canonical’s Launchpad and Bazaar.

Read More

Krita To Kickstart New Text And Vector Tools

Krita Foundation announced their third Kickstarter project to fund development of new text and vector tools. With the proposed features, the team aims to improve the user experience for, among others, comic book and webcomic artists.

Essentially, the team will ditch the Text tool inherited from Calligra Suite and create an easier-to-use UI for managing text and its styling, improve support for RTL and complex scripts (think CJK, Devanagari), add editing of text on path, non-destructive bending and distortion of text items etc.

Additionally, they will completely switch to SVG as an internal storage format for vector graphics and improve usability of related editing tools.

There are also 24 stretch goals, from composition guides to reference image docker improvements to LUT baking. In all likelihood, we are going to see at least some of the stretch goals done: that was the case for both past Kickstarter campaigns, and after the first two days this new campaign is already ca. 30% funded.

As usual, LGW asked project leader Boudewijn Rempt some technical questions about the development plans within the campaign.

Given the focus on text and vector tools, how many bits of Calligra Suite does Krita still share with the original project?

There is nothing shared anymore: the libraries that we used to share have been forked, so Calligra and Krita have separate and, by now, very different versions of those libraries. That was a really tough decision, but in the end we all realized that office and art applications are just too different.

So, we’ll probably drop all the OpenDocument loading and saving code in favor of SVG, with just an OpenDocument to SVG converter for compatibility with old KRA files.

We’ll implement a completely new text tool and drop the old text tools and its libraries. As for the vector tools, we’ll keep most of that code, since it is already half-ported to SVG, but we’ll rework the tools to work better in the context of Krita.

How far do you think Krita should go in terms of vector tools? I’m guessing you wouldn’t want to duplicate Karbon/Inkscape. But importing/exporting (EPS, AI, PDF, CDR etc.), boolean operations on paths, masks and clipping paths, groups, and suchlike?

For import/export, only SVG. And the functionality we want to implement first is what’s really important for artists: it must support the main thing, the raster art. So, things like vector-based speech balloons for comics, or decorative borders for trading cards, or some kinds of effects. Boolean ops on paths are really important for comic book frames, for instance.

Regarding text direction flow and OpenType features: how much do Qt and Harfbuzz provide for Krita already, and how much (and what exactly) do you need to write from scratch?

Qt’s text layout is a bit limited, it doesn’t do top-to-bottom for Japanese, for instance. So likely we’ll have to write our own layout engine, but we’ll be using harfbuzz for the glyph shaper.

Do you think it’s faster/easier to write and maintain your own engine than to patch Qt?

Well, they serve different purposes: Qt’s layout engine is general-purpose and mostly meant for things like the text editor widget or QML labels. We want things like automatic semi-random font substitution that places glyphs from different fonts, so we can have a better imitation of hand-lettered text, for instance. How far we’ll be able to take this is a bit of an adventure!

Some specifics of the proposed implementation make it look like you would slightly extend SVG. Is that correct?

Well, first we’ll look at what SVG2 proposes and see if that’s enough, then we’ll check what Inkscape is doing, and if we still need more flexibility, we’ll start working on extending SVG with our own namespace.

For vectors, I don’t think that will be necessary, but it might be necessary for text. If the kickstarter gets funded, I suspect I’ll be mailing Tavmjong Bah a lot!

Stretch goals cover all aspects of Krita: composition, game art, deep painting, general workflow improvements. How did you compile the list?

This January, we had a sprint in Deventer with some developers and some artists (Dmitry, me, beelzy, wolthera), where we went through all the wish bugs and feature requests and classified them. That gave us a big list of wishes of stretch goal size. Then later on, Timothée, Wolthera, Irina, and me sat down and compiled a list that felt balanced: some things that almost made it last years, some new things, bigger things, smaller things, something for every user.

One of the stretch goals is audio import for animation sequences. How far are you willing to go there? Just the basics, or do you see things like lipsync happen in the future?

Just the basics: we discussed this with the animators in our community, and lipsyncing just isn’t that much of a priority for them. It’s more having the music and the movement next to each other.

But that suggests multiple audio objects on the timeline, or would it be just a single track preprocessed in something like Ardour?

For now, a single track!

Read More

darktable 2.0 released with printing support

Darktable, free RAW processing software for Linux and Mac, got a major update just in time for your festive season.

The most visible new feature is the print module that uses CUPS. Printing is completely color-managed, you can tweak positions of images on paper etc. All the basics are in place.

print module in darktable

A nice “perk” of this new feature is the ability to export to PDF from the export module.

The other important change is improved color management support. The darkroom mode now features handy toggles for softproofing and gamut check below the viewport (darktable uses cyan to fill out-of-gamut areas). Additionally, thumbnails are properly color-managed now.

Something I personally consider a major improvement in terms of getting darktable to work nicely out of the box is that the viewport is finally sized automatically. No longer do you need to go through the trial-and-error routine of setting it up in the preferences dialog. It just works. Moreover, the mipmap cache has been replaced with a thumbnail cache, which makes a huge difference. Everything is really a lot faster.

film grain added in darktable

If you worry about losing your data (of course you do), darktable 2.0 finally supports deleting images to the system trash (where available).

The port to the Gtk+3 widget set is yet another major change that you might or might not care much about. It’s mostly there to bring darktable up to date with recent changes in Gtk+ and to simplify support for HiDPI displays (think Retina, 4K, 5K etc.)

The new version features just two additional image processing modules:

  • Color reconstruction attempts to restore useful data from overexposed areas in your photos.
  • Raw black/white point module is pretty much an internal feature that the team hopes you never ever touch (of course you will). It was a prerequisite step towards dual-ISO support and better denoising.

Other existing modules got all sorts of tweaks and updates. Most notably, deflicker from Magic Lantern was added to the exposure module.

Additionally, the watermark module features a simple-text.svg template now, so that you could apply a configurable text line to your photos. Which means that with a frame plugin and two instances of watermark you can use darktable for the most despicable reason ever:

making a meme in darktable

The most important change in Lua scripting is that scripts can now add buttons, sliders, and other user interface widgets to the lighttable view. To that end, the team started a new repository for scripts on GitHub.

Finally, the usual part of every release: updates to camera support:

  • Base curves for 8 more cameras by Canon, Olympus, Panasonic, and Sony.
  • White balance presets for 30 new cameras by Canon, Panasonic, Pentax, and Sony.
  • Noise profiles for 16 more cameras by Canon, Fujifilm, Nikon, Olympus, Panasonic, Pentax, and Sony.

For a more complete list of changes, please refer to the release announcement. Robert Hutton also shot a nice video covering the most important changes in this version of darktable:

LGW spoke to Johannes Hanika, Tobias Ellinghaus, Roman Lebedev, and Jeremy Rosen.

Changes in v2.0 could be summarized as one major new feature (printing) and lots of under-the-hood and user interaction changes (the Gtk+3 port, keyboard shortcuts etc.). All in all, it’s more of a gradual improvement of existing features. Is this mostly because of the time and effort that the Gtk+3 port took? Or would you say that you are now at the stage where the feature set is pretty much settled?

Tobias: That’s a tough question. The main reason was surely that the Gtk+3 port took some time. Secondly, the main motivation for most of us is scratching our itches, and I guess that most of the major ones are scratched by now. That doesn’t mean that we have no more ideas what we’d like to see changed or added, but at least most low-hanging fruits are picked, so everything new takes more time and effort than big changes done in the past.

Roman: The Gtk+3 port, as it seems, was the thing that got me initially involved with the project. On its own, just the port (i.e. rewriting all the necessary things, and making it compile and mostly be functional) did not take too long, no more than a week, and was finished even before the previous release happened (v1.6 that is). But it was the stabilization work, i.e. fixing all those small things that are hard to notice but are irritating and make for a bad user experience, that took a while.

Johannes: As far as I’m concerned, yes, darktable is feature complete. The under-the-hood changes are also pretty far-reaching and another reason why we call it 2.0.0. The Gtk+3/GUI part is of course the most visible and the one you can most easily summarize.

Jeremy: I’d like to emphasize the “under the hood” part. We did rewrite all our cache management, and that’s a pretty complicated part of our infrastructure. I don’t think this cycle was slow, it’s just that most of it is infrastructure work needed if we want darktable’s visible feature set to grow in the future…

color balance adjusted in darktable

Darktable seems to be following the general industry trend where software for processing RAW images becomes self-sufficient, with non-destructive local editing features such as a clone tool, as well as sophisticated selection and masking features. In the past, I’ve seen you talking about not trying to make a general-purpose image editor out of darktable, but these features seem to creep in no matter what; you are even considering a Liquify-like tool made by a contributor. Would you say that your project vision has substantially changed over time? How would you define it now?

Tobias: I don’t see too many general image manipulation features creeping in. We have had masks for a while, and the liquify/warping thing would be another one, but besides that I don’t see anything. There is also the question of where to draw the line. Is everything besides global filters (exposure, levels, …) already a step towards a general-purpose editor? Are masks the line being crossed? I don’t know for sure, but for me it’s mostly pushing individual pixels, working with layers, merging several images. We do none of those, and I hope we never will.

Johannes: I think this is caused by how darktable is governed. It’s very much driven by the needs of individual developers, and we’re very open when it comes to accepting the work of motivated contributors. We have a large dev base, so I guess it was just a matter of time until someone felt the need for this or that and just went ahead and implemented it. I guess you could say we weren’t consistent enough in rejecting patches, but so far I don’t think this strategy has hurt us much. On the contrary, it helps foster a large community of motivated developers.

HDR merging does exist though, and there’s even a feature request to add manual/automatic alignment. And both duplication and configurable blending of processing modules are a lot like working with layers, even though the processing pipeline is fixed.

Tobias: Yes, but that doesn’t counter my point: Editing single pixels is out of context, general calculations like that fit.

Johannes: To give a very specific answer to this very specific question: the HDR merging works on pre-demosaic raw data (which is why we have it, it’s substantially simpler than/different to other tools except Wenzel’s hdrmerge which came after IIRC). So automatic alignment is not possible (or even manual for that matter).

exposure adjusted in darktable

Have you already defined any major milestones for future development?

Tobias: No. Version 2.0 had the predefined milestone “Gtk+3 port”, but that was an exception. Normally we start working on things we like, new features pile up and at some point we say “hey, that looks cool already, and we didn’t have a release for a while, let’s stabilize and get this to the users”. There is a lot less planning involved than many might think.

Roman: As Tobias said, there are rarely pre-defined milestones. It is more like, someone has some cool idea, or needs some functionality that is not there yet, and he has time to implement it.

Personally, I have been working on an image operation for highlight reconstruction via inpainting. There are several of them already in darktable, but frankly, that is currently one of the important features still not completely handled by darktable.

There has been a lot of preparatory work under the hood over the last two releases, which has now opened the possibility for some interesting things, say, native support for Magic Lantern’s Dual ISO, or a new version of our profiled denoise image operation.

I’m also looking into adding yet another process() function to image operations that would not use any intrinsic instructions, but OpenMP SIMD only, thus ridding darktable of any hard dependency on x86 processors, i.e. it could work on ARM64 too.

Jeremy: I would like to add the manipulation of actual image parameters to Lua; that is a big chunk of work. Apart from that, it will mainly depend on what people do/want to do.

What kind of impact do you think the addition of Lua scripting has had on users’ workflows so far? What are the most interesting things you’ve seen people do with Lua scripting in darktable?

Tobias: Good question. We have been slowly adding Lua support since 1.4, but only now are we getting to a point where more advanced features can be done. In the future, I can see quite some fancy scripts being written that people can just use, instead of everyone coding the same helpers over and over again. That’s also the motivation for our Lua scripts repository on GitHub. While there are some official scripts, i.e. mostly written and maintained by Jeremy and me, we want them to be seen as an extension of the Lua documentation, so that others can get ideas on how to use our Lua API.

The results of that can be seen in the ‘contrib’ directory. The examples there range from background music for darktable’s slideshows to a hook that uses ‘mencoder’ to assemble timelapses. We hope to see many more contributions in the future.

Jeremy: Lua was added mainly for users that have a specific workflow that goes against the most common one. Darktable will follow the most common workflow, but Lua will allow other users to adapt DT to their specific needs.

That being said, I agree with Tobias that Lua in 1.6 was still missing some bricks to make it really useful. Without the possibility to add widgets (buttons, sliders etc.) to darktable, it was impossible to make a script that was really usable without technical knowledge.

With the Lua repository and the possibility to add widgets, things should go crazy really fast. Did you know that you can remote-control darktable via D-Bus by sending Lua commands?

white balance adjusted in darktable

In the early days of darktable, quite a few features (e.g. wavelet-based ones) came directly from papers published at SIGGRAPH etc. What’s your relationship with the academic world these days?

Tobias: We didn’t add many new image operations recently, and those that got added were mostly not so sophisticated that we had to take the ideas from papers. That doesn’t mean our link to the academic world was dropped: Johannes is still working as a researcher at a university, and when new papers come out, we might think about implementing something new, too.

Johannes: Yes, as Tobias says. But then again, graphics research is my profession, and darktable is for fun. No, seriously, the last few SIGGRAPHs didn’t have any papers that seemed to me a good fit for implementation in darktable.

Several years ago you switched to the rawspeed library by Klaus Post from the Rawstudio project. Now it looks like darktable is the primary “user” of rawspeed, and your own Pedro Côrte-Real is the 2nd most active contributor to the library. Doesn’t it feel at least a tiny bit weird? 😉

Tobias: I think it’s a great example of how open source projects can benefit from each other. I’m not sure if that’s weird or just a bit funny.

How has your relationship with the Magic Lantern project been evolving, given the deflicker feature etc.?

Tobias: The deflicker code wasn’t so much contributed by the Magic Lantern folks as written by Roman, with inspiration from how Magic Lantern does it. I don’t know if he used any of their code, maybe he can clarify. Apart from deflicker, there are also plans to support their dual-ISO feature natively.

Roman: The only direct contribution from the Magic Lantern project was the highlight reconstruction algorithm that made it into v1.6. The deflicker was implemented by me, as it usually happens, after I needed a way to auto-expose lots of images and found no way to do it. That being said, it uses exactly the same math as deflick.mo does.

Tobias: Even that was not taking code from them. Jo wrote it after talking with Alex at LGM.

Johannes: But it was most inspiring meeting those folks in person. And yes, I have been a lazy ass about implementing this dual-ISO support natively in darktable ever since LGM.
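
For the curious, the math being referenced is easy to sketch: pick a luminance percentile and compute the exposure correction that maps it onto the same target level in every frame. The Python toy below is my own illustration of the principle, not darktable’s or Magic Lantern’s actual code (darktable’s exposure module exposes the percentile and the target level as deflicker parameters):

    import math

    def deflicker_ev(luminances, percentile=50, target=0.18):
        # EV correction that maps the chosen percentile onto the target level
        values = sorted(luminances)
        idx = min(int(len(values) * percentile / 100), len(values) - 1)
        return math.log2(target / values[idx])

    frame = [0.05, 0.07, 0.09, 0.12, 0.30]         # toy luminance samples
    print(f"apply {deflicker_ev(frame):+.2f} EV")  # apply +1.00 EV

Applied across a whole timelapse, the same rule yields consistent brightness from frame to frame, which is exactly what deflickering is for.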

Darktable seems to be doing pretty well without any kind of community funding, which is all the rage these days. What do you think are the causes of that?

Tobias: Well, we’d need some legal entity that takes care of taxes. And to be honest, we don’t need that much money. Our server is sponsored by a nice guy and there are no other expenses. Instead we have been asking our users to donate to LGM for several years now and from what we can see that helped a lot.

As for why we have been doing so well, no idea. Maybe because we are doing what we want without caring if anyone would like it. To the best of our knowledge darktable has exactly 17 users (that number is measured with the scientific method of pulling it out of thin air), so whatever we do, we can lose at most those few. Nothing to worry about.


The new version of darktable is available as source code and a .dmg for Mac OS X. Builds for various Linux distributions have either already landed or are pending.

Read More

GIMP 2.9.2 Released, How About Features Trivia?

In a surge of long overdue updates, the GIMP team made the first public release in the 2.9.x series. It’s completely GEGL-based, has 16/32-bit per channel editing, and sports new tools. It’s also surprisingly stable, even for the faint of heart.

Obligatory disclaimer: I’m currently affiliated with the upstream GIMP project. Please keep that in mind when you think you’ve stumbled upon a biased opinion and feel like calling LGW out.

One might expect a detailed review here, which totally makes sense; however, writing two similar texts for both the upstream GIMP project and LGW would seem unwise. So there: the news post at GIMP.org briefly covers most angles of this release, while this article focuses on features trivia and possible areas of contribution.

The GEGL port and HDR

Originally launched in 2000 by a couple of developers from Rhythm & Hues visual effects studio, the GEGL project didn’t have it easy. It took 7 years to get it to GIMP at all, then another 8 years to power all of GIMP.

So naturally, after years and years (and years) of waiting, the very first thing people would be checking in GIMP 2.9.2 is this:

First and foremost, 64-bit is there mostly for show right now, although GIMP will open and export 64-bit FITS files, should you find any.

That said, you can use GIMP 2.9.2 to open a 32-bit float OpenEXR file, adjust color curves, apply filters, then overwrite that OpenEXR file or export it under a different name. Job done.

The same applies to PNG, TIFF, and PSD files: respective plugins have been updated to support 16/32-bit per channel data to make high bit depth support actually useful even for beta testers.

All retouching and color adjustment tools, as well as most, if not all, plugins are functional in 16/32-bit modes. There’s also basic loading and exporting of OpenEXR files available (no layers, no fancy features from v2.0).

GIMP also provides several tonemapping operators via the GEGL tool, should you want to go back to low dynamic range imaging.

Mantiuk06 tonemapping operation

There are, however, at least two major features in GEGL that are not yet exposed in GIMP:

  • RGBE (.hdr) loading and exporting;
  • basic HDR merging from exposure stacks.

This is one of the areas where an interested developer could make a useful contribution at a fairly low price in the time currency.

In particular, adding a GEGL-based HDR merge tool to GIMP should be easier now thanks to a widget for using multiple inputs to one GEGL operation (which would be exp-combine).
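
To give an idea of what such a tool does, here is a toy Python sketch of merging an exposure stack into one scene-referred value per pixel. The hat-shaped weighting and the overall structure are common textbook choices — my assumptions, not the actual exp-combine code — and the sketch also assumes the input values are already linearized; real merging would first undo the camera response curve:

    def merge_pixel(samples):
        # samples: (encoded value in [0, 1], exposure time in seconds) pairs
        num = den = 0.0
        for v, t in samples:
            w = 1.0 - abs(2.0 * v - 1.0)  # trust well-exposed mid-tones most
            radiance = v / t              # back to scene-linear light
            num += w * radiance
            den += w
        return num / den if den else 0.0

    # the same scene shot at 1/100 s (dark) and 1/25 s (bright)
    print(merge_pixel([(0.20, 0.01), (0.78, 0.04)]))  # ≈ 19.7, arbitrary units

The poorly exposed samples contribute little, so the merged result keeps usable detail in both shadows and highlights.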

GEGL operations

Currently 57 GIMP plugins are listed as completely ported to become GEGL operations, and 27 more ports are listed as work in progress. That leaves 37 more plugins to port, so the majority of the work appears to be done.

Additionally, GEGL features over 50 original filters, although some of them are currently blacklisted, because they need to be completed. Also, some of the new operations were written to implement certain features in GIMP tools. E.g. the Distance Map operation is used by the Blend tool for the Shape Burst mode, and both matting operations (Global and Levin) are used by the Foreground Select tool to provide mask generation with subpixel precision (think hair and other thin objects).

Various new operations exposed in GIMP, like Exposure (located in the Colors menu) and High Pass (available via the GEGL tool), are quite handy in photography workflows.

Note that if you are used to the “Mono” switch in the Channel Mixer dialog, this desaturation method is now available through a dedicated Mono Mixer operation (Colors->Desaturate submenu). It might take some getting used to.

Mono Mixer
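
What the operation does is easy to demonstrate: every output gray value is a weighted sum of the input channels, with the weights under your control. A minimal Python sketch (the weights below are arbitrary; GIMP exposes them as sliders):

    def mono_mix(rgb, wr=0.4, wg=0.4, wb=0.2):
        r, g, b = rgb
        return max(0.0, min(1.0, wr * r + wg * g + wb * b))

    print(mono_mix((0.8, 0.5, 0.2)))  # ≈ 0.56

Raising one weight at the expense of the others acts much like shooting black-and-white film through a color filter, which is why photographers like this method of desaturation.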

It’s also worth mentioning that 41 operations, counting both ports and original GEGL filters, have OpenCL versions, so they can run on a GPU.

And while the immensely popular external G’MIC plugin is not going to become a GEGL operation any time soon (most likely, ever), it has recently become ready to be used in conjunction with GIMP 2.9.x in any precision mode.

There are some technical aspects about GIMP filters and GEGL operations in GIMP 2.9.x that you might want to know as well.

First of all, some plugins have only been ported to use GEGL buffers, while others have become full-blown GEGL operations. In terms of programming time, the former is far cheaper than the latter, so why go the extra mile when GIMP 2.10 is long overdue and time could be spent more wisely?

Softglow

Porting plugins to use GEGL buffers simply means that a filter can operate on whatever image data you throw at it, be it 8-bit integer or 32-bit per color channel floating point. Which is great, because e.g. Photoshop CS2 users who tried 32-bit mode quickly learned there was quite a lot they couldn’t do, at least until CS4, released several years later.

The downside of this comparatively cheap approach is that, in the future non-destructive GIMP, these filters would be sad destructive remnants of the past. They would take bitmap data from a buffer node in the composition tree and overwrite it directly, so you would not be able to tweak their settings at a later time.
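
The difference is easy to show with a toy contrast in Python (my illustration, not GIMP code):

    pixels = [0.1, 0.4, 0.9]

    # destructive: overwrites the buffer and throws its settings away
    def brighten_in_place(buf, amount):
        for i, v in enumerate(buf):
            buf[i] = min(1.0, v + amount)

    # non-destructive: a node in the composition tree, tweakable at any time
    class BrightenNode:
        def __init__(self, source, amount):
            self.source, self.amount = source, amount
        def evaluate(self):
            return [min(1.0, v + self.amount) for v in self.source]

    node = BrightenNode(pixels, amount=0.2)
    print(node.evaluate())   # ≈ [0.3, 0.6, 1.0]
    node.amount = 0.1        # changed your mind later? just re-evaluate
    print(node.evaluate())   # ≈ [0.2, 0.5, 1.0]

A buffer-ported filter behaves like brighten_in_place(); a full GEGL operation behaves like the node.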

So the long-term goal is still to move as much as possible to GEGL. And that comes at a price.

First of all, you would have to rewrite the code in a slightly different manner. Then you would have to take an extra step and write a special UI in GIMP for the newly created GEGL op. The reason?

While the GEGL tool skeleton is nice for operations with maybe half a dozen settings (see the Softglow filter screenshot above), an automatically generated UI for something like Fractal Explorer would quickly make you lose your cool:

Old vs. new Fractal Explorer

The good news is that writing custom UIs is not particularly difficult, and there are examples to learn from, such as the Diffraction Patterns op:

Diffraction Patterns operation

As you can see, it looks like the former plugin, tabbed UI and all, yet it has all the benefits of being a GEGL operation, such as on-canvas preview, named presets, and, of course, being future-proof for non-destructive workflows.
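For the curious, “rewriting the code in a slightly different manner” means turning a plugin into a single C file where properties are declared with macros and the per-pixel work lives in a process() callback. A hedged sketch of a point filter, modeled on the ops in GEGL's operations/common directory (all names here are made up):

```c
/* my-brightness.c: a minimal GEGL point filter sketch. */
#ifdef GEGL_PROPERTIES

property_double (amount, "Amount", 0.0)
    description ("Value added to each color channel")
    value_range (-1.0, 1.0)

#else

#define GEGL_OP_POINT_FILTER
#define GEGL_OP_NAME     my_brightness
#define GEGL_OP_C_SOURCE my-brightness.c

#include "gegl-op.h"

static void
prepare (GeglOperation *operation)
{
  /* Ask for 32-bit float RGBA; GEGL converts to/from the image's
     actual precision automatically. */
  const Babl *format = babl_format ("R'G'B'A float");

  gegl_operation_set_format (operation, "input",  format);
  gegl_operation_set_format (operation, "output", format);
}

static gboolean
process (GeglOperation       *operation,
         void                *in_buf,
         void                *out_buf,
         glong                n_pixels,
         const GeglRectangle *roi,
         gint                 level)
{
  GeglProperties *o   = GEGL_PROPERTIES (operation);
  gfloat         *in  = in_buf;
  gfloat         *out = out_buf;

  while (n_pixels--)
    {
      out[0] = in[0] + o->amount;  /* R */
      out[1] = in[1] + o->amount;  /* G */
      out[2] = in[2] + o->amount;  /* B */
      out[3] = in[3];              /* alpha stays as-is */
      in  += 4;
      out += 4;
    }

  return TRUE;
}

static void
gegl_op_class_init (GeglOpClass *klass)
{
  GeglOperationClass            *operation_class = GEGL_OPERATION_CLASS (klass);
  GeglOperationPointFilterClass *point_class     = GEGL_OPERATION_POINT_FILTER_CLASS (klass);

  operation_class->prepare = prepare;
  point_class->process     = process;

  gegl_operation_class_set_keys (operation_class,
    "name",        "example:my-brightness",
    "title",       "My Brightness",
    "description", "Adds a constant to the RGB channels",
    NULL);
}

#endif
```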

FFmpeg support in GEGL

If you have already read the changelog for the two latest releases of GEGL, chances are that you are slightly puzzled about FFmpeg support. What would GEGL need it for? Well, there’s some history involved.

Øyvind Kolås started working on GEGL ca. 10 years ago by creating its smaller fork called gggl and using it for a video compositor/editor called Bauxite. That’s why GEGL has FFmpeg support in the first place.

Recently, Øyvind was sponsored by The Grid to revive the ff:load and ff:save operations. These ops drive the development of the iconographer project and add video capabilities to The Grid’s AI-based automatic website generator.

The FFmpeg-based loading and saving of frames could also come in handy for the GIMP Animation Package project, should it receive a much-needed revamp. At the very least, these ops would simplify loading frames from video files into GIMP.

New Tools

The new version has 6 new tools: 2 stable, 4 experimental. Here’s some trivia you might want to know.

GIMP is typically referred to as a tool that falls behind Photoshop. Opinions of critics differ: some say it’s like Photoshop v5, others graciously upgrade it all the way to a CS2 equivalent.

If you’ve been following the project for a while, you probably know that, anecdotally, the Liquid Rescale plugin was made available a year ahead of Content-Aware Scaling in Photoshop CS4. And you probably know that Resynthesizer made inpainting available in GIMP a decade before Content-Aware Fill made its way to Photoshop CS5:

But there’s more. One of the most interesting new features in GIMP 2.9.2 is the Warp Transform tool, written by Michael Muré during the Google Summer of Code 2011 program.

It’s the interactive on-canvas version of the venerable iWarp plugin that looked very much like a poor copy of Photoshop’s Liquify filter. Except iWarp was introduced to GIMP in 1997, while Liquify first appeared in Photoshop 6, released in 2000.

Warp Transform reproduces all the features of the original plugin, including animation via layers, and adds a sorely missed Erase mode designed to selectively retract some of the deformations you have added. The mode isn’t functioning correctly yet, so you won’t restore data to its original, pixel-crisp state, but there are a few more 2.9.x releases ahead to take care of that.

The Unified Transform tool is a great example of how much an interested developer can do with some persistence.

Originally, merging the Rotate, Scale, Shear, and Perspective tools into a single one was roughly scheduled for version 3.6. This would prove challenging, what with the Sun having exploded by then and the Earth being a scorched piece of rock rushing through space, with a bunch of partying water bears on its back.

But Mikael Magnusson decided he’d give it a shot out of curiosity. When the team discovered that he had already done a good chunk of the work, he was invited to participate in the Google Summer of Code 2012 program, where he completed this work.

Unfortunately, it’s also an example of how much the GEGL port delayed getting cool new features into the hands of benevolent, if slightly irritated masses.

Internal Search System

Over the years, GIMP has amassed so many features that locating them can be a bit overwhelming for new users. One way to deal with this is to review the menu structure, plugin names, their tooltips in the menu, etc., and maybe cut the most bizarre ones and move them into some sort of an ‘extras’ project.

Srihari Sriraman came up with a different solution: he implemented an internal search system. The system, accessible via Help->Search and Run a Command, reads the names of menu items and their descriptions and tries to find a match for the keyword that you specify in the search window.

Searching action in GIMP

As you can see, it does return irrelevant matches, because some tooltips provide an overly technical explanation (unsharp mask uses blurring internally to sharpen, and the tooltip says so, hence the match). This could eventually lead to some search optimization of tooltips.

Color Management

The news post at gimp.org casually mentions the completely rewritten color management plugin in GIMP. What it actually means is that Michael Natterer postponed the 2.9.2 release in April (originally planned to coincide with Libre Graphics Meeting 2015) and spent the next half-year rewriting the code.

The old color management plugin has been completely removed. Instead, libgimpcolor, one of GIMP’s internal libraries, got a new API for accessing ICC profile data, color space conversions, etc.
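For instance, ICC profiles can now be inspected via the GimpColorProfile object. A small sketch, assuming the 2.9.x-era libgimpcolor API and a placeholder file path:

```c
/* Load an ICC profile from disk and print some information about it. */
#include <gegl.h>
#include <libgimpcolor/gimpcolor.h>

static void
describe_profile (const char *path)   /* e.g. "/usr/share/color/icc/sRGB.icc" */
{
  GError           *error   = NULL;
  GFile            *file    = g_file_new_for_path (path);
  GimpColorProfile *profile = gimp_color_profile_new_from_file (file, &error);

  if (profile)
    {
      g_print ("Label:   %s\n", gimp_color_profile_get_label (profile));
      g_print ("Summary: %s\n", gimp_color_profile_get_summary (profile));
      g_print ("Is RGB:  %d\n", gimp_color_profile_is_rgb (profile));
      g_object_unref (profile);
    }
  else
    {
      g_printerr ("Could not load profile: %s\n", error->message);
      g_clear_error (&error);
    }

  g_object_unref (file);
}
```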

Since GIMP reads and writes OpenEXR files now, it seems obvious that GIMP should support ACES via OpenColorIO, much like Blender and Krita. This has been only briefly discussed by the team so far, and the agreement is that a patch would be accepted for review. So someone needs to sit down and write the code.

What about CMYK?

Speaking of color: nearly every time there’s a new GIMP release, even if it’s just a minor bugfix update, someone asks whether CMYK support has been added. This topic is now covered in the new FAQ at gimp.org, but there’s one more tiny clarification to make.

Since autumn 2014, GEGL has had an experimental (and thus not built by default) operation called Ink Simulator. It’s what one might call a prerequisite for implementing full CMYK support (actually, separation into an arbitrary number of plates) in GIMP. While the team gives this task a low priority (see the FAQ for an explanation), the operation is a good starting point for anyone interested in working on CMYK in GIMP.

Digital Painting

Changes to the native brush engine in GIMP are minor in the 2.9.x series due to Alexia’s maternity leave. Even so, painting tools got Hardness and Force sliders, as well as the optional locking of brush size to zoom.

Somewhat unexpectedly, most other changes in the painting department stem indirectly from the GIMP Painter fork by sigtech. The team evaluated various improvements in the fork and reimplemented them in the upstream GIMP project.

Canvas rotation and flipping

Canvas rotation and horizontal flipping. Featuring artwork by Evelyne Schulz.

Interestingly, while most of those new features might look major to painters, they actually turned out to be low-hanging fruit in terms of programming effort. Most bits had already been in place, hence GIMP 2.9.2 features canvas rotation and flipping, as well as an automatically generated palette of recently used colors.

Another new feature is experimental support for the MyPaint Brush engine. This is another idea from the GIMP Painter fork. The implementation is cleaner in programming terms, but it is quite incomplete and needs serious work before the new brush tool can be enabled by default.

MyPaint Brush tool

Some Takeaways For Casual Observers and Potential Contributors

As seen in the recently released GIMP 2.9.2, the upcoming v2.10 is going to be a massive improvement, with highlights such as:

  • high bit depth support (16/32-bit per channel);
  • on-canvas preview for filters;
  • OpenEXR support;
  • better transformation tools;
  • new digital painting features;
  • fully functional color management;
  • improved file format support.

Much of what could be said about the development pace in the GIMP project has already been extensively covered in a recent editorial.

To reiterate, a lot of anticipated new features are blocked by the lack of GIMP 2.10 (complete GEGL port) and GIMP 3.0 (GTK+3 port) releases. There are not enough human resources to speed it up, and available developers are not crowdfundable due to existing work and family commitments.

However, for interested contributors there are ways to improve both GIMP and GEGL without getting frustrated by the lack of releases featuring their work. Some of them have been outlined above, here are a few more:

  • Create new apps that use GEGL (example: GNOME Photos).
  • Port more GIMP filters to GEGL or create entirely new GEGL operations (both would be almost immediately available to users).
  • Create OpenCL versions of GEGL operations.

All of these contributions will directly or indirectly improve GIMP.

With that—thanks for reading!

Read More

Afanasy Render Farm Manager Gets Natron Support

Timur Hairulin released an update of his free/libre CGRU render farm management tools.

The newly arrived version of CGRU features support for Natron, the free/libre VFX compositing and animation software, as well as for Fusion, one of its proprietary counterparts, made by Blackmagic Design.

Timur has great hopes for Natron:

I still haven’t used it in production yet, because it needs to become more stable first. Once that’s done, getting an artist to use Natron should be easy. After all, it looks and behaves a lot like Nuke. Besides, it has a great Python API. For instance, I don’t need to create gizmos in TCL like in Nuke.

Once you install CGRU, you will find CGRU’s Natron plugins in the cgru/plugins/natron folder. Add this path to the NATRON_PLUGIN_PATH environment variable to make the Afanasy node available in Natron. Further documentation is available at the project’s website.

Support for Fusion was added by Mikhail Korovyansky. He tested it on v7, but v8 should be supported as well.

Additionally, Keeper now allows quickly changing the local render user name, and RULES now allows linking to the current frame in the player.

Given the already existing support for Blender in CGRU, a complete libre-based studio solution should now be a step closer.

CGRU 2.0.7 with Natron and Fusion support is available for download for both Linux and Mac OS X users.

Read More