Tag Archives: news

Inkscape to organize its first community-funded hackfest

The Inkscape team is raising funds to organize a three-day hackfest in Toronto this April, right before the annual Libre Graphics Meeting event.

The idea is to get ca. 10-12 Inkscape contributors into a single room and let them plan their further work on Inkscape, do actual programming and, ultimately, make Inkscape better.

The Inkscape Board has already decided to use $10,000 from the project’s donations fund to cover travel and accommodation expenses for both the hackfest and participation in the Libre Graphics Meeting, but more money is likely to be required, as not every team member lives in North America.

Why is this fundraiser a big deal?

Despite meeting each other at LGM every other year or so, team members have never had proper quality hacking time together face to face. And while videoconferencing might help, it’s not all it’s cracked up to be.

So who’s coming, and what’s in it for the community?

The likely participants so far are:

Martin Owens. He’s been getting increasingly involved with the project over the last several years, doing all kinds of work, from adding new handy features and fixing bugs to programming the new website.

Tavmjong Bah. For years Tav had been orbiting the project as creator of A Guide to Inkscape — the reference for Inkscape users. Eventually he started doing programming, mostly to improve SVG 1.1 and SVG 2 compatibility, and then he became Inkscape’s ambassador in the SVG Working Group, where he makes sure that SVG provides features that are in demand by illustrators. Read his blog post for details on a recent SVG Working Group meeting in Sydney.

Jabiertxo Arraiza Cenoz. He joined the project only a year or so ago, but he’s likely to become the next Inkscape superstar thanks to his work on live path effects, most of which you will be able to make use of when v0.92 is released. If you ever wanted a fillet/chamfer tool in Inkscape or felt that Spiro curves should be visualized as you draw them, you absolutely want him at the hackfest, because that’s exactly what he has already done. Imagine what else he can come up with!

Bryce Harrington. One of the founding members of Inkscape, currently doing mostly boring organizational work that, nevertheless, has to be done to keep the project’s gears rotating smoothly.

Joshua Andler. He is one of the day-one Inkscape users. Apart from being another Inkscape Board member, he’s been organizing Inkscape booths at SCALE (Southern California Linux Expo) since what feels like the dawn of time.

The agenda of the hackfest is subject to change, but here are some rough ideas that will be taken into consideration:

  • roadmap planning, how new major releases can be cut faster;
  • early start on redesigning the extensions system;
  • looking at what can be done to improve print-ready output (CMYK, spot colors);
  • various usability improvements.

The actual agenda will become more definite towards the beginning of the hackfest, when the team has a better understanding of who exactly is coming and what these people are interested in working on.

The idea is to make Inkscape hackfests a common way to speed up development. But since getting people from around the world together is not exactly cheap, this is where your support will play a major role.

Sound appealing? Go ahead and donate to help organize the first Inkscape hackfest.


INRIX Traffic Data Integrated into First Ubuntu Smartphone

The Ubuntu Phone from Bq

Ubuntu for Phones may not quite have turn-by-turn navigation smarts on hand, but it does boast some real-time traffic data thanks to a partnership with INRIX. The company will provide traffic, incident and parking reports for the world’s first Ubuntu smartphone, helping users stay up-to-date with planned routes. “By providing incident updates, parking location and availability information […]

The post INRIX Traffic Data Integrated into First Ubuntu Smartphone first appeared on OMG! Ubuntu!.


Talks submission deadline for LGM2015 extended

Have you developed an interesting free software project for graphic designers, photographers or 3D artists? Are you a creative professional who’s willing to do a workshop on using GIMP, Krita, Inkscape, Blender, and other free tools? LGM 2015 is still accepting proposals for its 10th anniversary event on April 29 – May 2, 2015, in Toronto.

The submission deadline has just been extended to February 13, so if you missed the first one, there’s still time to sit down, check your schedule, and write a nice proposal.

This year the event is organized by ginger coons, chief editor of Libre Graphics Magazine (Issue 2.3 was released during FOSDEM last weekend, by the way), and Amy Ratelle from the University of Toronto (the venue).

Also, if your line of work is video editing or even open hardware (as long as it has creative applications) rather than graphic design, the program committee will review your proposal anyway.

For details about submitting a proposal check the Call for Participation page.

And if you are willing to support free software developers and educators traveling to the event to meet users and fellow developers and make better software, there’s an ongoing Pledgie campaign.


Elog.io: give credit to creators or die trying

The Commons Machinery project is taking the next step towards bringing order to the arcane world of media file metadata on the web. With Elog.io, they aim to clear up the chaos somewhat and help credit creators. It’s an uphill battle, but the good news is: you can help.

After all these years, preserving metadata in media files and properly crediting authors is still a major issue on the Internet. Image editing software could be smarter, and popular social networks could stop wiping credits and descriptions from pictures (let alone claiming they own them). It’s a huge mess, and we are in it up to our necks.

The Commons Machinery project was launched in 2013 to fix this unhealthy state of affairs. The project was backed by the Shuttleworth Foundation early on.

By the end of 2013, they had enough proof-of-concept code to show that this bold enterprise was for real and that the suggested solution was, in fact, just about doable, at least for public domain content and content available under any of the Creative Commons licenses.

At the time they had rough, yet working preliminary patches for Inkscape, GIMP, and Aloha HTML editor, as well as extensions for Mozilla Firefox and LibreOffice.

However, eventually they quit this leg of the race and refocused on Elog.io, a web service that takes “fingerprints” of known public domain and CC-licensed works of art and matches user-submitted media files against its database of such fingerprints.

Elog.io will help you, should your fancies take you there, to discover that a terrifying painting of a handsome young woman cutting the head off a bearded man at the advice of some old scoundrel is, in fact, “Judith Beheading Holofernes” by Caravaggio.

As of November 2014, Elog.io had taken digital fingerprints of ca. 23 million photographs from Wikimedia Commons. That is to say, you should not expect the service to help you discover the true origin of a photo you saw on Facebook where a humorous cat is taking a teddy bear to the cleaners. At least not just yet. But old art, photos from public archives etc. are definitely in there already.
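To give a rough idea of how this kind of fingerprint matching works, here is a toy block-mean perceptual hash in Python. It is only an illustration of the general principle: the actual service uses the Blockhash algorithm (mentioned later in this article), and the file names below are made up.

```python
# Toy block-mean "fingerprint" matching: an illustration of the idea only,
# not Elog.io's actual Blockhash implementation.
from PIL import Image  # pip install Pillow


def block_hash(path, bits=16):
    """Return a bits*bits tuple of 0/1 values describing the image at `path`."""
    img = Image.open(path).convert("L").resize((bits, bits))
    pixels = list(img.getdata())
    median = sorted(pixels)[len(pixels) // 2]
    # Each block becomes 1 if it is brighter than the median, 0 otherwise.
    return tuple(1 if p > median else 0 for p in pixels)


def hamming(a, b):
    """Number of positions where two hashes differ."""
    return sum(x != y for x, y in zip(a, b))


def is_verbatim_copy(path_a, path_b, max_distance=8):
    """Conservative match: only near-identical images pass the threshold."""
    return hamming(block_hash(path_a), block_hash(path_b)) <= max_distance


if __name__ == "__main__":
    # Hypothetical file names, for illustration only.
    print(is_verbatim_copy("judith_original.jpg", "judith_reupload.jpg"))
```

A mirrored, cropped or heavily retouched copy flips many of these bits, which is why such a strict threshold catches verbatim copies but not derivatives; that trade-off comes up again later in the interview.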

With extensions for Google Chrome and Mozilla Firefox to help you match images in your browser window against Elog.io’s database, the team, now comprised of Jonas Öberg (Sweden) and Artem Popov (Russia), is certain it can take things to the next level. Hence the newly launched Indiegogo campaign.

The team is asking for $6,000 in return for 4 months of development and getting 18 million additional photographs under open licenses or in the public domain into Elog.io.

In the best traditions of talking rather than doing, LGW spoke to Jonas Öberg, CEO of Commons Machinery.

At some point in the past you stopped working on your libraries for end-user software to aid crediting collaborators, abandoned your patches for Inkscape and GIMP, and switched completely to working on Elog.io. How do you currently prioritize the projects that are part of Commons Machinery?

The prototyping we did for Inkscape and GIMP had one particular goal in mind: to figure out how much work it would be to implement support for persistent metadata, and how to get it in and out of the applications.

We sidelined those projects when it became clear that (1) it’s a hell of a lot of work, and (2) it’s even more work when you consider that you also need to engage the community to get any changes into the core.

It’s not so much that it takes time (it does take time too), but that it’s an expensive context switch. Two people working four hours a week each, consistently, one on Inkscape and one on GIMP, could accomplish much more over time than one person working on both at the same time.

What has to be done to resume the work on the projects that were put aside?

From our side, we’d love it if someone took ownership of what we started by simply forking our code on GitHub and continuing development within each respective community. We’ll happily give what support we can to that person, and that person should ideally come from within each respective community.

The other option is that we eventually come back to this and start working on it again. One of two things needs to happen for that to work:

  • Either we find funding to do that particular work. But if someone is willing to fund that work, it’s better that they get someone from the Inkscape/GIMP communities than to pay us to do it.
  • OR, we manage well enough with Elog.io that we eventually get to the point of having tighter integration between Elog.io and applications like Inkscape/GIMP, at which point this would become relevant in part again.

That’s so far into the future though that we haven’t even begun thinking about when that could be.

What’s the status of libraries that you developed for end-user content authoring software to simplify development of metadata preservation and crediting of collaborators — libcredit and others?

I do believe they’re useful, but they’re not hooked into the Elog.io infrastructure idea. Their use would likely be in ensuring that end-user software supports and can manage metadata at all, which is a good precondition for later hooking it into Elog.io, of course.

(Artem Popov chimes in to clarify…)

Librecontext is the new name for libremix. It works with RDF metadata (as libcredit does too), but for the catalog we switched from RDF to MediaAnnotations. So, yeah, while they’re probably useful, they’re not related to our current work on Elog.io and not useful for integrating Elog.io with desktop software.

Libgetmetadata was originally written to collect metadata “on the fly”, i.e. work directly in a browser, but it has been rewritten to work server-side and is seldom useful outside of a very specific kind of web application. The best way to work with the catalog now is the API.
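For readers who want to experiment, here is a minimal sketch of what talking to such a catalog over HTTP could look like in Python. The endpoint URL and parameter names below are placeholders rather than the documented Elog.io API, so treat it as an outline, not working client code.

```python
# Hypothetical catalog lookup over HTTP; the endpoint and parameters are
# placeholders, not the real Elog.io API.
import json
import urllib.parse
import urllib.request

CATALOG_URL = "https://catalog.example.org/lookup/hash"  # placeholder URL


def lookup_by_hash(image_hash):
    """Ask the catalog which known works (if any) match a perceptual hash."""
    query = urllib.parse.urlencode({"hash": image_hash})
    with urllib.request.urlopen("%s?%s" % (CATALOG_URL, query), timeout=10) as resp:
        return json.loads(resp.read().decode("utf-8"))


if __name__ == "__main__":
    for work in lookup_by_hash("ffd8e100a3b4"):  # made-up example hash
        print(work.get("title"), work.get("creator"), work.get("license"))
```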

Let’s talk about Elog.io. I’m getting the impression that Elog.io focuses on what seems achievable (helping people with good intentions credit other people’s work) but also on something that’s more difficult to do: finding the origin of a picture you stumble upon. This suggests fighting against, at the very least, two gorillas.

The first one is discoverability, that is, immediate availability of Chrome and Firefox extensions by default — there’s a high risk that Elog.io may not become a mainstream tool, unless it’s shipped by default and is, generally, just there. How are you going to deal with this?

You’re right, and my hope is that we’ll see not only our own plugins (which I guess can be seen more as a demonstration of what can be done) but also plugins and implementations into the main trunks of other software directly.

For instance, it doesn’t seem as if it would be a lot of work to implement support in GIMP such that when you load an image, its metadata is looked up in Elog.io and offered as default metadata within GIMP.
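To make that idea a bit more tangible, here is a rough sketch of what such a hook could look like as a GIMP 2.8 Python-Fu plug-in. It is purely illustrative: the lookup function is a placeholder, the menu entry is made up, and a real integration would live in GIMP’s core metadata handling rather than in a script like this.

```python
#!/usr/bin/env python
# Hypothetical Python-Fu sketch (GIMP 2.8, Python 2): on demand, look up the
# current image in a catalog and show whatever credit information comes back.
# elogio_lookup_by_file() is a placeholder for a real catalog query.
from gimpfu import *


def elogio_lookup_by_file(filename):
    # Placeholder: hash the file and query the catalog here (see the API
    # sketch earlier in this article). Returns a human-readable credit line.
    return "No match found for %s" % filename


def lookup_credits(image, drawable):
    if not image.filename:
        gimp.message("Save or load the image from disk first.")
        return
    gimp.message(elogio_lookup_by_file(image.filename))


register(
    "python-fu-elogio-lookup",            # procedure name
    "Look up image credits in a catalog",
    "Queries an Elog.io-style catalog for the current image (illustrative).",
    "Example Author", "Example Author", "2015",
    "<Image>/Filters/Web/Look Up Credits...",
    "*",                                  # works on any image type
    [],                                   # no extra input parameters
    [],                                   # no return values
    lookup_credits)

main()
```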

Oh-kay. The second creepy anthropoid would be modifications with all kinds of intent. Right now the FAQ states that the Blockhash algorithm you picked doesn’t work very well with modifications, and that you’ve deliberately set the bar at verbatim copying.

However, it’s fair use to take a CC-BY-SA licensed image and modify it to fit, e.g., the dimensions of a featured image in a blog post.

And then there’s malicious intent. As you know, technology is often abused. A common tactic for creating an image to serve as proof for e.g. fake news is to mirror the original image and then slightly retouch it, so that it doesn’t look doctored and yet cannot be found via Google image search. But that means you’d have to start dabbling in forensics one way or another. So, is setting the bar at verbatim copying a temporary technical decision, a permanent ideological one, or something else entirely?

It’s mostly technical, and a matter of solving more easily solvable problems first.

There are, of course, algorithms that can match images more broadly, including finding derivative works. What happens though is that as you match more derivative works, you also match more images that aren’t actually derivatives or copies of the original (false positives). For instance, a photo of a church matching another church (or even the same church, but taken by a different user).

The use case we started with requires us to be more authoritative. When we match an image, we want to be reasonably certain that we’re showing a true match. It’s less of a problem for us if we miss a few matches that we could have made, as long as we don’t present potentially false information.

That said though: I think you could see this as an ideological stance too. We don’t believe in policing derivative works. As you say, there will always be ways around it such that algorithms can’t detect it. We don’t want that kind of arms race. Instead, we think the focus should be on those who do want to do good.

For such cases, we can also do a lot more that doesn’t involve algorithms! Such as implementing application support that automatically registers derivative works in Elog.io and sets the right source works. That would be much better than any algorithm!

Wouldn’t that count as policing users instead of policing derivative works though?

We’re trying to do neither, of course. If we really wanted to police users, we’d implement tight restrictions in Elog.io regarding what you can and cannot do, or enforce certain actions. That’s not something we’re keen to do.

We see the information in Elog.io and our presentation of it more as an informational signpost on the road, and a helpful guide as you work. An application might offer to register the user’s derivative work, but it wouldn’t do so automatically without consent.

Another application may tell the user that they’re about to alter a work under a no-derivatives license, but it won’t disable editing functionality just because of it.

Did you discuss your technology with the largest publishers of CC-licensed content, such as Flickr, SoundCloud, Vimeo, YouTube etc.?

Flickr, yes. They’re very happy about what we’re doing and they don’t mind us using their research data to start with to load information from Flickr into our database. After that, we’ll need to hit their API a bit more, but this also doesn’t seem to be a concern for them. We’ve poked them a bit about getting more/easier access, and while they can do that, it’s not their top development priority yet.

What about the largest publishers of proprietary content such as Facebook, Pinterest, Instagram and others (admittedly, YouTube et al. too), some of which even drop metadata entirely? Is there some light at the end of the tunnel?

Yes and no. It depends on what your goal is. We’ve spoken to PicScout, which holds a lot of proprietary content from Getty Images and many others. They don’t have a problem as such with sharing information with us so that we could match images from their catalogs too. The problem is that they have a legal department: they think that just by virtue of having a “Copy” button, Elog.io would indirectly support fraudulent use of their images.

So they would like to see Elog.io disable the copy button for works that are matched against their database. That’s something that could perhaps be done, but it’s a slippery slope, because the next issue is that, as we discovered the last time we peeked into their database, their catalogs also contain some public domain works that some copyright registry claims are owned by someone. So we can’t fully trust what’s in their catalog without risking excluding some public domain works.

It’s possible we can negotiate something around this, such as making sure that if a work is matched in both Wikimedia Commons and PicScout, we give preference to the Commons image, but they aren’t too happy about that either, since they feel it might lead to people uploading “their” images to Wikimedia Commons under a false license.

There’s a lot of negativity regarding YouTube’s policy of blocking content because of alleged reuse of copyrighted material by users. Do you think Elog.io could become an additional tool here, improving the matching of soundtracks against a database that specifically covers a large body of public domain and otherwise free-to-use works of art?

The Sintel case on YouTube does come to mind, indeed. What would happen in our world is that the Blender Foundation or a member of the community would have added Sintel to Elog.io (or their own Elog.io catalog installation; we’ve built with a distributed design in mind). YouTube, upon receiving a notice of infringement, would look up that same work in the Elog.io catalog, and if there’s a match with conflicting information from the infringement notice, it would keep the video up and trigger a flag in YouTube’s system that a person, rather than a computer, needs to evaluate this conflict and determine what to do.

At the moment, content holders are completely trusting towards information given to them by content creators like Sony. Having an open catalog of information about what digital works are in the public domain or openly licensed would be useful for content holders to have something to push back against the content creators with. Whether that would happen in practice remains to be seen, but we continue to be hopeful.
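The flow Jonas describes can be summed up in a few lines of Python. Everything here is hypothetical: the data structures, field names and license checks are made up purely to illustrate the decision he outlines.

```python
# Toy sketch of the hypothetical dispute flow described above; the data
# structures and field names are made up for illustration.
OPEN_LICENSES = {"public-domain", "CC0", "CC-BY", "CC-BY-SA"}


def handle_infringement_notice(notice, catalog):
    """Decide what to do with a takedown notice for a hosted video."""
    match = catalog.lookup(notice["work_fingerprint"])
    if match and match["license"] in OPEN_LICENSES \
            and match["rights_holder"] != notice["claimed_rights_holder"]:
        # Conflicting information: keep the work up and hand the dispute
        # to a human reviewer instead of blocking it automatically.
        return {"action": "keep-online", "needs_human_review": True}
    return {"action": "regular-takedown-process", "needs_human_review": False}
```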

Let’s say we are now in 2016, and there are hundreds of millions of items in the Elog.io database. What’s your next step?

Read/write. I’d hope we’d get there sooner than 2016, but I’m sure reality would get in the way. This is where we make it possible to accept and curate user-contributed information to Elog.io, both for your own personal use and to help curate more and higher quality metadata about works being shared and used.

It’s hard not to notice that Commons Machinery is only two people now, which, presumably, contributed to a lower “appetite” with regard to requested funding (compared to your first campaign). What happened?

Partly money: we did the basic research and development with the support of the Shuttleworth Foundation, as you know. They supported this for a period of two years, but decided that, now that we’ve shown that the technology works and it’s just a matter of scaling up and continuing development, they don’t learn anything new from continuing to support us.

So we’re slowly learning to stand on our own now. If we’re successful with the Indiegogo campaign, I think that sets us on a good path forward.


nget: Sometimes a hammer is just a hammer

I take a sidelong approach to a lot of the newsgroup tools I try. Many of them seem well-thought-out, and are no doubt of considerable use to the people who rely on them. But newsgroups on the whole tend to disappoint me, so a lot of the usefulness is a side note.

nget is probably a good example of that. As a straight-shot command line tool, I have no doubt it does marvelous work for some people.

nget output (three screenshots)

It hammers out the job in a very old-fashioned, traditional way, with an .nget5 folder holding an .ngetrc file that needs to be edited with the name of a news server (I used news.aioe.org again, and had no problems) before it will run.

Then you have a long list of flag options for nget, depending on what you want it to do — download unread messages, pull in attachments based on a name filter, or whatever you like. In that sense, it’s very very flexible.

But from where I sit, it’s not very exciting. You can see some of its output in the images above, and that plus error messages or connection reports seems to be all it will tell you.

I could expect more, but that’s where my general disinterest in newsgroups starts to kick in. Perhaps you’ll find it more interesting or useful than I did; in any case, I can vouch for it working acceptably, and as promised.

But if you’re looking for something a little more interactive, a little less cryptic and maybe even a little more colorful, there are other tools available that can simplify the newsgroup experience. Keep an open mind.

After all, sometimes a hammer is just a hammer. And sometimes what you want and need … is a hammer.



Inkscape 0.91 released

Over 4 years in the making, thousands of commits, dozens of new features: it’s all there, neatly packed in the newly released Inkscape 0.91.

Inkscape has become the go-to generic vector graphics editor on Linux, and while its Windows and Mac ports feel less native, there is a strong community of Inkscape users on those platforms as well.

Open Clip Art illustration in Inkscape 0.91

Some of the most important changes in this release are:

  • new tool for measuring distances and angles;
  • multithreaded, hence faster SVG filters rendering;
  • support for font faces beyond bold/italic;
  • vastly improved support for Corel DRAW, EMF, and WMF files;
  • newly added support for Microsoft Visio diagrams and stencils;
  • real world units support;
  • symbols library for reusing design elements;
  • Cairo-based rendering and PNG exporting.

And here is a quick video review of 10 personal favorites among the new features and improvements.

It might have something to do with having been over 4 years in the making, but apart from exciting new features, Inkscape 0.91 also brings more than a handful of usability improvements.

Here’s just a quick list:

  • Layers can be freely reordered by dragging and dropping. It’s mind-boggling that this was not available before, but now it’s there.
  • Size of on-canvas controls is now configurable (Edit -> Preferences -> Input/Output -> Input Devices -> Handle size). If you think that nodes et al. shouldn’t be so big that they obstruct actual drawing, you can easily change that now.
  • Keyboard shortcuts are finally configurable in the Preferences dialog rather than in a text editor. Self-explanatory.
  • For panning, you can now press Space, then click and drag the mouse pointer.
  • The Division boolean operation finally works sensibly on shapes and paths when you cut them with a line.
  • Selecting objects with the same fill and/or stroke is now just a few clicks away.

As you can see, those are important everyday things, stuff that directly affects productivity.

There are even less visible features like extensions that simplify adding SVG filters to objects by providing sensible controls and live preview:

Specular Light extension in Inkscape 0.91
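To give an idea of what such extensions look like under the hood, here is a bare-bones sketch in the style of the 0.91-era inkex API. It only wraps a single feGaussianBlur filter and skips the .inx file that defines the dialog controls and live preview, so consider it a simplified illustration rather than one of the shipped extensions.

```python
#!/usr/bin/env python
# Bare-bones sketch of a 0.91-era Inkscape extension that wraps an SVG filter:
# it adds a <filter> with feGaussianBlur to <defs> and applies it to the
# selection. The shipped filter extensions additionally provide an .inx file
# with dialog controls and live preview, which is omitted here.
import inkex
from lxml import etree


class SimpleBlur(inkex.Effect):
    def __init__(self):
        inkex.Effect.__init__(self)
        self.OptionParser.add_option("--stddev", action="store", type="float",
                                     dest="stddev", default=2.0,
                                     help="Blur standard deviation")

    def effect(self):
        svg = self.document.getroot()
        defs = svg.find(inkex.addNS('defs', 'svg'))
        if defs is None:
            defs = etree.SubElement(svg, inkex.addNS('defs', 'svg'))

        # Create the filter once and reference it from each selected object.
        filt = etree.SubElement(defs, inkex.addNS('filter', 'svg'),
                                {'id': 'simple-blur'})
        etree.SubElement(filt, inkex.addNS('feGaussianBlur', 'svg'),
                         {'stdDeviation': str(self.options.stddev)})

        for node in self.selected.values():
            node.set('filter', 'url(#simple-blur)')


if __name__ == '__main__':
    SimpleBlur().affect()
```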

And the list goes on. Needless to say, Inkscape 0.91 does feel like a major improvement over the previous version. And yet, if you’ve been around for long enough, there are some important questions you can’t help asking. So we spoke to Bryce Harrington and Tavmjong Bah, both Inkscape Board members and long-time contributors to the project.

Inkscape 0.91 was really long in the works for a number of reasons. There were quite a few calls for shorter release cycles in the past, but somehow that hasn’t quite worked out yet. What are your plans for dealing with this in the future?

Tavmjong: We have a major under-the-hood change in 0.91 (switching to the Cairo renderer). This introduced a lot of regressions that took a while to fix, the major one being in bitmap scaling, which required waiting for a fix in Cairo. While this work was happening, we had other smaller but still important changes going on with our code base, which complicated things.

Bryce: Like you mention, there were many reasons why the release took so long. Similarly, there are many different things that need to be done to ensure it goes more swiftly in the future. But there are a few highlights to mention.

For one thing, we need to get back on track with a well-defined roadmap, and to restrain ourselves from undertaking too many big changes in one go. Either spread out large changes over multiple releases, or limit the number of planned changes in each specific release.

As a project, we like to hold off on releases until everything is perfect; so, when we introduce a big new change, it takes time for us to stabilize everything, and if we do too many big changes, we have to wait for all of them to stabilize.

Along those lines, we’re also working to incorporate better automated testing. Johan’s Jenkins work is the spear tip here, but hopefully we’ll see test suites getting fleshed out and other novel testing mechanics put into play. The hope here is to decrease the amount of time needed for stabilizing big changes, and to avoid the churn of regressions from day-to-day development activity.

Third, there are various infrastructural updates and changes that will help simplify the release procedure. Right now, cutting a source package requires manually touching over a dozen files — this is because we have several different build systems that all use incompatible config files. So, changing our build systems, scripting the release process, and so on can help minimize the actual effort required to do releases.

In the future, we need to break major changes into smaller, easier to manage chunks.

Pretty much every release since v0.41 (released in 2006) had some performance improvements. However, Inkscape still doesn’t handle large documents (lots of objects and nodes) very well. Are there some improvements you can think of that might help here? Do you think SVG imposes some difficulties, as some developers claim?

Tavmjong: I don’t believe SVG is the issue here. The biggest issue, I think, is that our code just isn’t very efficient. Try putting a random print statement in a function and then see how many times it is called when doing a simple operation like moving a node. We have too many signals being triggered and handled. Our code lacks proper inline documentation, which results in “cargo-cult programming” (a term I just learned).

Bryce: With performance analysis, it’s important not to pick the solution before understanding the problem, so I hesitate to give specific ideas. It can be easy to micro-optimize a feature here and a feature there and yet leave the overall user experience poor.

First what we need is a set of scripted workloads: loading up a large file, or zooming in or out, or toggling gradients on and off, or so on. Then use performance tools (e.g. linux perf) to identify potential locations for optimization. By the way, this first step is a lot of grunt work but not terribly technical, and a great entry-level task for someone who has ample interest.

Next is the hard work of looking for ways to simplify logic, switch to more efficient algorithms, cache intermediary results, batch disk I/O, avoid unnecessary rendering, or whatever.

Then, we run those optimizations through the original performance test suite to ensure the changes cause no regressions.
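As an illustration of the first step Bryce describes, here is a small timing harness that exports a heavy document headlessly using the 0.91-era command-line options. The test file name is a placeholder, and this only measures wall-clock time; for actual profiling you would run the same command under a tool like perf, as he suggests.

```python
#!/usr/bin/env python
# Illustrative workload script: time how long headless Inkscape takes to
# render a large test document to PNG. Uses the 0.91-era CLI options
# (-z: no GUI, -f: input file, -e: export PNG); the test file is a placeholder.
import subprocess
import time

TEST_FILE = "large-test-document.svg"   # placeholder: any heavy real-world file
RUNS = 5


def time_export():
    start = time.time()
    subprocess.check_call(["inkscape", "-z", "-f", TEST_FILE,
                           "-e", "/tmp/workload-out.png"])
    return time.time() - start


if __name__ == "__main__":
    timings = [time_export() for _ in range(RUNS)]
    print("min %.2fs  avg %.2fs" % (min(timings), sum(timings) / RUNS))
    # For profiling rather than timing, wrap the same command in
    # `perf record -g ...` and inspect the result with `perf report`.
```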

Aside from that, the other area to look into is the renderer, i.e. Cairo. We’ve discussed perhaps looking more into Cairo GL, for experimentation if nothing else. There are also other 2D renderers out there if we want to look beyond Cairo, although, in my opinion, they all seem to involve some painful trade-off or other.

For years I’ve been noticing a certain notion/attitude in the community that Inkscape and professional printing, as in CMYK PDF export and spot color support, are a lost cause. Do you think it still might happen? After all, Cairo was recently suggested to be the wrong architectural component to attack this with, and Scribus’s own libpdf is heavily dependent on Scribus’s document object model. Besides, developers who are competent in printing and have spare time on their hands are quite a bit of an urban myth. So, what’s your take on this?

Tavmjong: I don’t think Cairo is necessarily the wrong architecture, but it does seem to have some missing functionality. I don’t think it is a lost cause, but, as you point out, it will take someone with the proper skill set and motivation to get it done.

Bryce: This may be a good example of an area where we need to use our funded development processes, and perhaps an associated fundraiser, to move things along here.

The upcoming v0.92 already has some exciting changes like the much anticipated fillet/chamfer live path effect or live Spiro path preview. What do you want to tackle next?

Bryce: For me personally: defining the roadmap, then kickstarting the funded development stuff. After that, time to start the 0.92 release. And plenty of board work to keep me busy between those.

Tavmjong: Personally, SVG 2 things, with auto-flowed text at the top of the list (I just added basic, experimental support for SVG 2 text in a shape to trunk).

How does the development and decision-making process work in the project these days? How do you decide which feature ideas are sensible and which are not?

Tavmjong: Mostly, the people doing the work decide. We’ve always had a very open policy to checking in code. We haven’t had any time where a major piece of code has been checked in that the community has balked at.

Bryce: Make a proposal, get it peer-reviewed, and when the consensus is in favor, bang out the code and land it when you feel it’s ready.

I believe that the people doing the work deserve the decision-making power. Given that release work and board stuff consume nearly all my available free time, this means I pretty much defer feature idea decisions to other folks who are actually touching the code.

I facilitate and break logjams here and there when asked, but the project seems to be pretty good at reaching consensus collaboratively and making decisions collectively. Aside from perhaps nailing these decisions down into a roadmap, I see no reason to change how things are done.

In big projects like Inkscape, there’s usually some internal work going on. What are the biggest under-the-hood changes lately and what still needs doing? How do these changes affect the “front-end”, the user-visible part?

Tavmjong: There has been some code clean-up. For example, I reworked the style handling code to make it easier to maintain. I’ve also added a lot of SVG 2 things which for the moment are hidden (see next question).

Tav, for the past several years you’ve been actively involved with the development of SVG 2. What do you think are the most user-visible benefits of this that one could lay his/her hands on in 0.91 or, possibly, 0.92?

Tavmjong: In 0.91, there are very few user-visible benefits, as the SVG 2 features that are supported in rendering are not exposed in the GUI (CSS blend modes, stroke behind fill, arrowheads that automatically point the right way, markers that automatically match stroke color, etc.).

Before adding these new things to the GUI, one of two things needs to happen: 1) there must be widespread browser support for the feature, or 2) we need to provide an SVG 1.1 fallback for it. At the moment we don’t have a framework for providing SVG 1.1 fallbacks.

Does it affect features like extra blending modes too? There are quite a lot of people dying for soft light etc. support.

Rendering support for all 16 CSS blending modes is there using the ‘mix-blend-mode’ property. Additional blending modes are also supported in filters. None of this is exposed via the GUI.

What’s the deal with gradient meshes, and why was this feature disabled for 0.91?

Meshes are not turned on for a variety of reasons: the specification is not 100% stable (there was a recent “bike-shedding” type change decided by the SVG working group which hasn’t made it into the spec or into Inkscape), no browsers support meshes yet, and the GUI is not 100% functional and stable.

I do plan on enabling meshes in trunk so they can get more attention but whether they stay enabled in 0.92 will have to be seen. I am currently working on adding “auto-smoothing” to meshes and will present a proposal for this at the SVG meeting in Sydney next month.

SVG and CSS have been developed in sync by the respective W3C working groups for the past several years. How well is Inkscape doing with regard to supporting this recent trend?

Tavmjong: Yes, a lot of what was in SVG is being moved into specs that can be shared between SVG and CSS (transforms, blending, filter effects, etc.). This is usually a good thing as it means that the browsers are more likely to support things in SVG if they need to code it for CSS/HTML.

It brings new things to SVG, such as HSL colors, since SVG 2 references the latest CSS color spec. It can be problematic in some cases, as a problem that could easily be solved within the SVG working group must now be passed by the CSS working group, which isn’t always easy or quick (e.g. changing ‘image-rendering’ so that it better reflects how authors want their bitmaps scaled).

In some cases, there is nothing to be done on Inkscape’s side as the CSS spec is basically identical to what was in SVG 1.1 (e.g. basic filter effects). In other cases we need to adapt our code (e.g. add HSL color support).

Tav, here’s a vanity question 🙂 Do you think your involvement with the SVG working group helps shape SVG into a better standard for illustrators?

Yes. For example, I’ve done all the work putting mesh gradients and hatched fills into the spec as well as CSS based wrapped text.

I should mention that Inkscape has supported this work by paying for some of my travel to SVG working group meetings.

Last year the Inkscape Board announced that the project is now encouraging paid development. Has anyone approached the Board so far to have a go? Or was it postponed until after 0.91 is out?

Bryce: Certainly our top focus has been the v0.91 release, but I’m hoping we can get moving on the funded development projects. We have three projects in mind so far: GSList removal, box blur support for faster blurring, and more SVG 2 features support. We need to allocate some initial funds and then set up fundraising. After that we’ll be looking for applicants.

Tavmjong: We really need to get our funded development projects going. The Inkscape developer community is not very large, and it would really benefit from having people who are able to work on Inkscape as part of their paid work.


You can download Inkscape 0.91 for Windows and Mac, from an Ubuntu repository, and, of course, as a source code archive.
