Linux Journal Blogs

Microsoft Announces First Custom Linux Kernel, German Government Chooses Open-Source Nextcloud and More

News briefs for April 17, 2018.

Microsoft yesterday introduced Azure Sphere, a Linux-based OS and cloud
service for securing IoT devices. According to ZDNet, “Microsoft President Brad Smith introduced Azure Sphere saying, ‘After 43 years, this is the first
day that we are announcing, and will distribute, a custom Linux kernel.'”

The German government’s Federal Information Technology Centre (ITZBund)
has chosen open-source Nextcloud for its self-hosted cloud solution, iTWire
reports. Nextcloud was chosen for its strict security requirements and
scalability “both in terms of large numbers of users and extensibility with
additional features”.

European authorities have effectively ended the Whois public database of
domain name registration, which ICANN oversees. According to The
Register, the service isn’t compliant with the GDPR and will be illegal
as of May 25th: “ICANN now has a little over a month to come up with a
replacement to the decades-old service that covers millions of domain names
and lists the personal contact details of domain registrants, including their
name, email and telephone number.”

A new release of PySolFC, a free and open-source collection of
more than 1,000 card Solitaire and Mahjong games, was announced
recently. The new stable release, 2.2.0, is the first since 2009.

The deadline for proposals to speak at Open Source Summit North America
is April 29. The summit is being held in Vancouver, BC, this year from
August 29–31.

In other event news, Red Hat today announced the keynote speakers and agenda for its
largest ever Red Hat Summit being held at the Moscone Center in San Francisco, May 8–10.


Bassel Khartabil Free Fellowship, GNOME 3.28.1 Release, New Version of Mixxx and More

News briefs for April 16, 2018.

The Bassel Khartabil Free Fellowship was awarded yesterday to Majd
Al-shihabi, a Palestinian-Syrian engineer and urban planning graduate based
in Beirut, Lebanon: “The Fellowship will support Majd’s efforts in
building a unified platform for Syrian and Palestinian oral history archives,
as well as the digitizing and release of previously forgotten 1940s era
public domain maps of Palestine.” Creative Commons also announced the
first three winners of the Bassel Khartabil Memorial Fund: Egypt-based The
Mosireen Collective, and Lebanon-based Sharq.org and ASI-REM/ADEF Lebanon.
For all the details, see the announcement on the Creative Commons website.

GNOME 3.28 is ready for prime time after receiving its first point
release on Friday, which includes numerous improvements and bug fixes. See
the announcement
for all the details on version 3.28.1.

Apache Subversion 1.10 has been released. This version is “a superset of
all previous Subversion releases, and is as of the time of its release
considered the current ‘best’ release. Any feature or bugfix in 1.0.x through
1.9.x is also in 1.10, but 1.10 contains features and bugfixes not present in
any earlier release. The new features will eventually be documented in a 1.10
version of the free Subversion book.” New features include improved
path-based authorization, a new interactive conflict resolver, added support
for LZ4 compression and more. See the release notes for more information.

A new version of Mixxx, the free and open-source DJ software, was released
today. Version 2.1 has “new and improved controller mappings, updated Deere
and LateNight skins, overhauled effects system, and much more”.

Kayenta, a new open-source project from Google and Netflix for automated
deployment monitoring, was announced recently. GeekWire reports that the
project’s goal is “to help other companies that want to modernize their
application deployment practices but don’t exactly have the same budget and
expertise to build their own solution.”


Multiprocessing in Python


Reuven M. Lerner
Mon, 04/16/2018 – 09:20

Python’s “multiprocessing” module feels like threads, but actually launches
processes.

Many people, when they start to work with Python, are excited to hear
that the language supports threading. And, as I’ve discussed in previous
articles,
Python does indeed support native-level threads
with an easy-to-use and convenient interface.

However, there is a downside to these threads—namely the global
interpreter lock (GIL), which ensures that only one thread runs at a
time. Because a thread cedes the GIL whenever it uses I/O, this means
that although threads are a bad idea in CPU-bound Python programs, they’re a
good idea when you’re dealing with I/O.

But even when you’re using lots of I/O, you might prefer to take full
advantage of a multicore system. And in the world of Python, that
means using processes.

In my article “Launching External Processes in Python”, I described how
you can launch processes from within a Python program, but those examples
all involved running an external program in a new process. Normally, when
people talk about processes, they mean something that works much like
threads, but is even more independent (and has more overhead, as well).

So, it’s something of a dilemma: do you launch easy-to-use
threads, even though they don’t really run in parallel? Or, do you
launch new processes, over which you have little control?

The answer is somewhere in the middle. The Python standard library
comes with “multiprocessing”, a module that gives the feeling of
working with threads, but that actually works with processes.

So in this article, I look at the “multiprocessing” library and describe some of the basic things it can do.

Multiprocessing Basics

The “multiprocessing” module is designed to look and feel like the
“threading” module, and it largely succeeds in doing so. For example,
the following is a simple example of a multithreaded program:


#!/usr/bin/env python3

import threading
import time
import random

def hello(n):
    time.sleep(random.randint(1,3))
    print("[{0}] Hello!".format(n))

for i in range(10):
    threading.Thread(target=hello, args=(i,)).start()

print("Done!")

In this example, there is a function (hello) that prints
“Hello!” along with whatever argument is passed. The program then runs a
for loop that invokes hello ten times, each time in an independent thread.

But wait. Before the function prints its output, it first sleeps for a
few seconds. When you run this program, you then end up with output that
demonstrates how the threads are running in parallel, and not
necessarily in the order they are invoked:


$ ./thread1.py
Done!
[2] Hello!
[0] Hello!
[3] Hello!
[6] Hello!
[9] Hello!
[1] Hello!
[5] Hello!
[8] Hello!
[4] Hello!
[7] Hello!

If you want to be sure that “Done!” is printed after all the threads
have finished running, you can use join. To do that, you need to grab
each instance of threading.Thread, put it in a list, and then invoke
join on each thread:


#!/usr/bin/env python3

import threading
import time
import random

def hello(n):
    time.sleep(random.randint(1,3))
    print("[{0}] Hello!".format(n))

threads = [ ]
for i in range(10):
    t = threading.Thread(target=hello, args=(i,))
    threads.append(t)
    t.start()

for one_thread in threads:
    one_thread.join()

print("Done!")

The only difference in this version is that it puts each thread
object in a list (“threads”) and then iterates over that list,
joining the threads one by one.

But wait a second—I promised that I’d talk about
“multiprocessing”, not threading. What gives?

Well, “multiprocessing” was designed to give the feeling of working
with threads. This is so true that I basically can do some
search-and-replace on the program I just presented:

  • threading → multiprocessing
  • Thread → Process
  • threads → processes
  • thread → process

The result is as follows:


#!/usr/bin/env python3

import multiprocessing
import time
import random

def hello(n):
    time.sleep(random.randint(1,3))
    print("[{0}] Hello!".format(n))

processes = [ ]
for i in range(10):
    t = multiprocessing.Process(target=hello, args=(i,))
    processes.append(t)
    t.start()

for one_process in processes:
    one_process.join()

print("Done!")

In other words, you can run a function in a new process, with full
concurrency, taking advantage of multiple cores, with
multiprocessing.Process. It works very much like a thread, including
the use of join on the Process objects you create. Each instance of
Process represents a process running on the computer, which
you can see using ps, and which you can (in theory) stop with
kill.
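For instance, here's a minimal sketch (not from the listings above) showing that each Process object exposes the child's PID, which you could cross-check against ps output:

```python
#!/usr/bin/env python3

import multiprocessing
import os

def show_pid():
    # This function runs in the child, which has its own PID.
    print("child sees pid:", os.getpid())

p = multiprocessing.Process(target=show_pid)
p.start()
print("parent sees child pid:", p.pid)  # the same PID ps would show
p.join()
print("exit code:", p.exitcode)         # 0 on a clean exit
```

Calling p.terminate() instead of waiting would send the child the same SIGTERM that kill sends by default.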

What’s the Difference?

What’s amazing to me is that the API is almost identical, and yet two
very different things are happening behind the scenes. Let me try to
make the distinction clearer with another pair of examples.

Perhaps the biggest difference, at least to anyone programming with
threads and processes, is the fact that threads share global
variables. By contrast, separate processes are completely separate; one
process cannot affect another’s variables. (In a future article, I plan
to look at how
to get around that.)

Here’s a simple example of how a function running in a thread can
modify a global variable (note that what I’m doing here is to prove a
point; if you really want to modify global variables from within a
thread, you should use a lock):


#!/usr/bin/env python3

import threading
import time
import random

mylist = [ ]

def hello(n):
    time.sleep(random.randint(1,3))
    mylist.append(threading.get_ident())   # bad in real code!
    print("[{0}] Hello!".format(n))

threads = [ ]
for i in range(10):
    t = threading.Thread(target=hello, args=(i,))
    threads.append(t)
    t.start()

for one_thread in threads:
    one_thread.join()

print("Done!")
print(len(mylist))
print(mylist)

The program is basically unchanged, except that it defines a new, empty
list (mylist) at the top. The function appends its ID to that list
and then returns.

Now, the way that I’m doing this isn’t so wise, because Python data
structures aren’t thread-safe, and appending to a list from within
multiple threads eventually will catch up with you. But the point
here isn’t to demonstrate threads, but rather to contrast them with
processes.

When I run the above code, I get:


$ ./th-update-list.py
[0] Hello!
[2] Hello!
[6] Hello!
[3] Hello!
[1] Hello!
[4] Hello!
[5] Hello!
[7] Hello!
[8] Hello!
[9] Hello!
Done!
10
[123145344081920, 123145354592256, 123145375612928,
 ↪123145359847424, 123145349337088, 123145365102592,
 ↪123145370357760, 123145380868096, 123145386123264,
 ↪123145391378432]

So, you can see that the global variable mylist is shared by the
threads, and that when one thread modifies the list, that change is
visible to all the other threads.
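As an aside, if you really did want those appends to be safe, one sketch of the lock-based approach mentioned earlier (not the article's program, which deliberately omits it) would be:

```python
#!/usr/bin/env python3

import threading
import time
import random

mylist = [ ]
lock = threading.Lock()

def hello(n):
    time.sleep(random.randint(1,3))
    with lock:                       # serialize access to the shared list
        mylist.append(threading.get_ident())
    print("[{0}] Hello!".format(n))

threads = [ ]
for i in range(10):
    t = threading.Thread(target=hello, args=(i,))
    threads.append(t)
    t.start()

for one_thread in threads:
    one_thread.join()

print(len(mylist))   # always 10, with no risk of a lost update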

But if you change the program to use “multiprocessing”, the output
looks a bit different:


#!/usr/bin/env python3

import multiprocessing
import time
import random
import os

mylist = [ ]

def hello(n):
    time.sleep(random.randint(1,3))
    mylist.append(os.getpid())
    print("[{0}] Hello!".format(n))

processes = [ ]
for i in range(10):
    t = multiprocessing.Process(target=hello, args=(i,))
    processes.append(t)
    t.start()

for one_process in processes:
    one_process.join()

print("Done!")
print(len(mylist))
print(mylist)

Aside from the switch to multiprocessing, the biggest change in this
version of the program is the use of os.getpid to get the current
process ID.

The output from this program is as follows:


$ ./proc-update-list.py
[0] Hello!
[4] Hello!
[7] Hello!
[8] Hello!
[2] Hello!
[5] Hello!
[6] Hello!
[9] Hello!
[1] Hello!
[3] Hello!
Done!
0
[]

Everything seems great until the end when it checks the value of
mylist. What happened to it? Didn’t the program append to it?

Sort of. The thing is, there is no single “it” in this program. Each
process that “multiprocessing” creates gets its own copy of the global
mylist. Each process thus appends to its own list, which goes away when
the process exits.

This means the call to mylist.append succeeds, but it succeeds in
ten different processes. When the function returns from executing in
its own process, there is no trace left of the list from that
process. The only mylist variable in the main process remains empty,
because no one ever appended to it.

Queues to the Rescue

In the world of threaded programs, even when you’re able to append to
the global mylist variable, you shouldn’t do it. That’s because
Python’s data structures aren’t thread-safe. Indeed, only one data
structure is guaranteed to be safe for this kind of sharing—the Queue
class: queue.Queue for threads, and its process-aware counterpart in the
multiprocessing module.

Queues are FIFOs (that is, “first in, first out”). Whoever wants to add
data to a queue invokes the put method on the queue, and whoever
wants to retrieve data from a queue uses the get method.
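That FIFO behavior is easy to verify in a few lines (a standalone sketch, separate from the article's examples):

```python
from multiprocessing import Queue

q = Queue()
q.put("first")
q.put("second")

a = q.get()   # blocks until an item is available
b = q.get()
print(a, b)   # items come back in the order they went in
```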

Now, queues in the world of multithreaded programs prevent issues
having to do with thread safety. But in the world of multiprocessing,
queues allow you to bridge the gap among your processes, sending data
back to the main process. For example:


#!/usr/bin/env python3

import multiprocessing
import time
import random
import os
from multiprocessing import Queue

q = Queue()

def hello(n):
    time.sleep(random.randint(1,3))
    q.put(os.getpid())
    print("[{0}] Hello!".format(n))

processes = [ ]
for i in range(10):
    t = multiprocessing.Process(target=hello, args=(i,))
    processes.append(t)
    t.start()

for one_process in processes:
    one_process.join()

mylist = [ ]
while not q.empty():
    mylist.append(q.get())

print("Done!")
print(len(mylist))
print(mylist)

In this version of the program, I don’t create mylist until late in
the game. However, I create an instance of
multiprocessing.Queue
very early on. That Queue instance is designed to be shared across the
different processes. Moreover, it can handle any type of Python data
that can be stored using “pickle”, which basically means any data
structure.

In the hello function, the call to mylist.append is replaced with one
to q.put, placing the current process’s ID number on the queue.
Each of the ten processes the program creates adds its own PID to the
queue.

Note that this program takes place in stages. First it launches ten
processes, then they all do their work in parallel, and then it waits
for them to complete (with join), so that it can process the
results. It pulls data off the queue, puts it onto mylist, and then
performs some calculations on the data it has retrieved.
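As one hypothetical extension of that staged pattern (not part of the article's example), each worker could send back a computed result, say its input and that input squared, for the parent to tally after the join. Here the queue is passed as an argument rather than used as a global, so the sketch also works where processes are spawned rather than forked:

```python
#!/usr/bin/env python3

import multiprocessing
from multiprocessing import Queue

def square(n, q):
    # Send (input, result) back to the parent through the queue.
    q.put((n, n * n))

q = Queue()
processes = [ ]
for i in range(10):
    p = multiprocessing.Process(target=square, args=(i, q))
    processes.append(p)
    p.start()

for one_process in processes:
    one_process.join()

# Drain exactly the ten results the workers produced.
results = dict(q.get() for _ in range(10))
print(sorted(results.items()))
```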

The implementation of queues is so smooth and easy to work with that
it’s easy to forget these queues are using some serious
behind-the-scenes operating system magic to keep things
coordinated. It’s easy to think that you’re working with threading,
but that’s just the point of multiprocessing; it might feel like
threads, but each process runs separately. This gives you true
concurrency within your program, something threads cannot do.

Conclusion

Threading is easy to work with, but threads don’t truly execute in parallel.
Multiprocessing is a module that provides an API that’s almost
identical to that of threads. This doesn’t paper over all of the
differences, but it goes a long way toward making sure things aren’t
out of control.


FOSS Project Spotlight: Ravada


Francesc Guasch
Fri, 04/13/2018 – 13:48

Ravada is an open-source project that allows users to connect to a
virtual desktop.

Currently, it supports KVM, but its back end has been designed and
implemented to allow future hypervisors to be added to the
framework. The client’s only requirements are a web browser and a remote
viewer that supports the SPICE protocol.

Ravada’s main features include:

  • KVM back end.
  • LDAP and SQL authentication.
  • Kiosk mode.
  • Remote access for Windows and Linux.
  • Light and fast virtual machine clones for each user.
  • Instant clone creation.
  • USB redirection.
  • Easy and customizable end-user interface (i18n, l10n).
  • Administration from a web browser.

It’s very easy to install and use. Following the documentation, virtual
machines can be deployed in minutes. It’s an early release, but it’s
already used
in production. The project is open source, and you can download the code
from GitHub. Contributions welcome!

[Screenshots: choosing a screen, and the list of virtual machines]


Elisa Music Player Debuts, Zenroom Crypto-Language VM Reaches Version 0.5.0 and More

News briefs for April 13, 2018.

The Elisa music player, developed by the KDE community, debuted yesterday, with
version 0.1. Elisa has good integration with the Plasma desktop and also supports
other Linux desktop environments, as well as Windows and Android. In addition, the
Elisa release announcement notes, “We are creating a reliable product that is a joy to
use and respects our users’ privacy. As such, we will prefer to support online services
where users are in control of their data.”

Mozilla released Firefox 11.0 for iOS yesterday, and this new version turns
on tracking protection by default. The feature uses a list provided by
Disconnect to identify trackers, and it also provides options for turning it
on or off overall or for specific websites.

The Zenroom project, a brand-new crypto-language virtual machine, has
reached version 0.5.0. Zenroom’s goal is “improving people’s awareness of
how their data is processed by algorithms, as well as facilitate the work of
developers to create and publish algorithms that can be used both client and
server side.” In addition, it
“has no
external dependencies, is smaller than 1MB, runs in less than 64KiB memory and is
ready for experimental use on many target platforms: desktop, embedded, mobile, cloud
and browsers.” The program is free software and is licensed under the GNU LGPL v3.
Its main use case is “distributed computing of untrusted code where advanced
cryptographic functions are required”.

ZFS On Linux, recently in the news for data-loss issues, may finally
be getting SSD TRIM support, which has been in the works for years,
according to Phoronix.

System76 recently became a GNOME Foundation Advisory Board member. Neil
McGovern, Executive Director of the GNOME Foundation, commented,
“System76’s long-term ambition to see free software grow is highly
commendable, and we’re extremely pleased that they’re coming on board to
help support the Foundation and the community.” See the betanews article
for more details.


Facebook Compartmentalization


Kyle Rankin
Thu, 04/12/2018 – 10:06


I don’t always use Facebook, but when I do, it’s over a
compartmentalized browser over Tor.

Whenever people talk about protecting privacy on the internet, social-media sites like
Facebook inevitably come up—especially right now. It makes sense—social
networks (like Facebook) provide a platform where you can share your
personal data with your friends, and it doesn’t come as much of a surprise
to people to find out they also share that data with advertisers (it’s
how they pay the bills after all). It makes sense that Facebook uses
data you provide when you visit that site. What some people might
be surprised to know, however, is just how much Facebook tracks them
when they aren’t using Facebook itself, but are just browsing around the web.

Some readers may solve the problem of Facebook tracking by saying
“just don’t use Facebook”; however, for many people, that site may be the
only way they can keep in touch with some of their friends and family members.
Although I don’t post
on Facebook much myself, I do have an account and use it to keep in
touch with certain friends. So in this article, I explain how I employ
compartmentalization principles to use Facebook without leaking too much
other information about myself.

1. Post Only Public Information

The first rule for Facebook is that, regardless of what you think your
privacy settings are, you are much better off if you treat any content
you provide there as being fully public. For one, all of those different
privacy and permission settings can become complicated, so it’s easy to
make a mistake that ends up making some of your data more public than
you’d like. Second, even with privacy settings in place, you don’t have
a strong guarantee that the data won’t be shared with people willing to
pay for it. If you treat it like a public posting ground and share
only data you want the world to know, you won’t get any surprises.

2. Give Facebook Its Own Browser

I mentioned before that Facebook also can track what you do when you
browse other sites. Have you ever noticed little Facebook “Like” icons
on other sites? Often websites will include those icons to help increase
engagement on their sites. What it also does, however, is link the fact
that you visited that site with your specific Facebook account—even
if you didn’t click “Like” or otherwise engage with the site. If you
want to reduce how much you are tracked, I recommend selecting a separate
browser that you use only for Facebook. So if you are a Firefox user, load
Facebook in Chrome. If you are a Chrome user, view Facebook in Firefox. If
you don’t want to go to the trouble of managing two different browsers,
at the very least, set up a separate Firefox profile (run firefox -P from
a terminal) that you use only for Facebook.
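As a rough sketch, creating and launching that dedicated profile from the command line might look like this (the profile name “facebook” is arbitrary; both flags are standard Firefox command-line options):

```shell
# One-time: create a Firefox profile used only for Facebook
firefox -CreateProfile facebook

# Launch a separate Firefox instance tied to that profile
firefox -P facebook --no-remote
```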

3. View Facebook over Tor

Many people don’t know that Facebook itself offers a .onion service that
allows you to view Facebook over Tor. It may seem counterintuitive that a site
that wants so much of your data would also want to use an anonymizing
service, but it makes sense if you think it through. Sure, if you access
Facebook over Tor, Facebook will know it’s you that’s accessing it,
but it won’t know from where. More important, no other sites on the
internet will know you are accessing Facebook from that account, even if
they try to track via IP.

To use Facebook’s private .onion service, install the Tor Browser Bundle,
or otherwise install Tor locally, and follow the Tor documentation to
route your Facebook-only browser to its SOCKS proxy service. Then visit
https://facebookcorewwwi.onion, and only you and Facebook will know you
are hitting the site. By the way, one advantage to setting up a separate
browser that uses a SOCKS proxy instead of the Tor Browser Bundle is
that the Tor Browser Bundle attempts to be stateless, so you will have
a tougher time making the Facebook .onion address your home page.
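As a sketch of that SOCKS setup, you could add the proxy preferences to the Facebook-only profile's user.js. The profile directory below is hypothetical (check ~/.mozilla/firefox/profiles.ini for yours), and port 9050 assumes a system Tor service; the Tor Browser Bundle listens on 9150 instead:

```shell
# Hypothetical profile directory; adjust to match your own system.
PROFILE=~/.mozilla/firefox/abcd1234.facebook

# Route the dedicated profile through the local Tor SOCKS proxy,
# resolving DNS through the proxy as well.
cat >> "$PROFILE/user.js" <<'EOF'
user_pref("network.proxy.type", 1);
user_pref("network.proxy.socks", "127.0.0.1");
user_pref("network.proxy.socks_port", 9050);
user_pref("network.proxy.socks_remote_dns", true);
EOF
```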

Conclusion

So sure, you could decide to opt out of Facebook altogether, but if you
don’t have that luxury, I hope a few of these compartmentalization
steps will help you use Facebook in a way that doesn’t completely remove
your privacy.


Mozilla's Internet Health Report, Google's Fuchsia, Purism Development Docs and More

News briefs for April 12, 2018.

Mozilla recently published its annual Internet Health Report. Its three major concerns are:

  • “Consolidation of power over the Internet, particularly by Facebook, Google, Tencent, and Amazon.”
  • “The spread of ‘fake news,’ which the report attributes in part to the ‘broken online advertising economy’ that provides financial incentive for fraud, misinformation, and abuse.”
  • The threat to privacy posed by the poor security of the Internet of Things.

(Source: Ars Technica’s “The Internet has serious health problems, Mozilla Foundation report finds”)

Idle power on some Linux systems could drop by 10% or more with the Linux 4.17 kernel, reports Phoronix. Evidently, that’s not all that’s in the works regarding power management features: “performance of workloads where the idle loop overhead was previously significant could now see greater gains too”. See Rafael Wysocki’s “More power management updates for v4.17-rc-1” pull request.

Google’s “not-so-secret” operating system named Fuchsia that’s been in development for almost two years has attracted much speculation, but now we finally know what it is not. It’s not Linux. According to a post on xda, Google published a documentation page called “the book” that explains what Fuchsia is and isn’t. Several details still need to be filled in, but documentation will be added as things develop.

Instagram will soon allow users to download their data, including photos, videos and messages, according to a TechCrunch report: “This tool could make it much easier for users to leave Instagram and go to a competing image social network. And as long as it launches before May 25th, it will help Instagram to comply with upcoming European GDPR privacy law that requires data portability.”

Purism has started its developer docs effort in anticipation of development boards being shipped this summer. According to the post on the Purism website, “There will be technical step-by-step instructions that are suitable for both newbies and experienced Debian developers alike. The goal of the docs is to openly welcome you and light your path along the way with examples and links to external documentation.” You can see the docs here.


Promote Drupal Initiative Announced at DrupalCon


Katherine Druckman
Wed, 04/11/2018 – 11:03

Yesterday’s keynote from Drupal project founder Dries Buytaert kicked off the annual North American gathering of Drupalists from around the world, and also kicked off a new Drupal community initiative aimed at promoting the Drupal platform through a coordinated marketing effort using funds raised within the community.

The Drupal Association hopes to raise $100,000 to enable a global group of staff and volunteers to complete the first two phases of a four-phase plan to create consistent and reusable marketing materials to allow agencies and other Drupal promoters to communicate Drupal’s benefits to organizations and potential customers quickly and effectively.

Convincing non-geeks and non-technical decision-makers of Drupal’s strengths has always been a pain point, and we’ll be watching with great interest as this initiative progresses.

Also among the announcements were demonstrations of how easy it could soon be to manipulate content within the Drupal back end using a drag-and-drop interface, which would provide great flexibility for site builders and content editors.

We also expect to see improvements to the Drupal site-builder experience in upcoming releases, as well as improvements to the built-in configuration management process, which eases the deployment process when developing in Drupal.

See the full keynote to get inspired by what’s to come in the Drupalverse.

And also see the DrupalCon Nashville Playlist!


OSI's Simon Phipps on Open Source's Past and Future


Christine Hall
Wed, 04/11/2018 – 09:20

With an eye on the future, the Open Source Initiative’s president
sits down and talks with Linux Journal about the organization’s
20-year
history.

It would be difficult for anyone who follows Linux and open source to
have missed the 20th birthday
of open source
in early February. This was a dual celebration,
actually, noting the passing of 20 years since the term “open source” was
first coined and since the formation of the Open Source Initiative (OSI), the
organization that decides whether software licenses qualify to wear that
label.

The party came six months or so after Facebook was successfully convinced
by the likes of the Apache Foundation; WordPress’s developer, Automattic;
the Free Software Foundation (FSF); and OSI to change the licensing of its
popular React project away from the BSD + Patents license, a license that
had flown under the radar for a while.

The brouhaha began when Apache developers noticed a term in the license
forbidding the suing of Facebook over any patent issues, which was
troublesome because it gave special consideration to a single entity,
Facebook, which pretty much disqualified it from being an open-source
license.

Although the incident worked out well—after some grumblings Facebook
relented and changed the license to MIT—the Open Source Initiative
fell under some criticism for having approved the BSD + Patents license,
with some people suggesting that maybe it was time for OSI to be rolled
over into an organization such as the Linux Foundation.

The problem was that OSI had never approved the BSD + Patents.

Simon Phipps delivers the keynote at Kopano Conference 2017 in
Arnhem, the Netherlands.

“BSD was approved as a license, and Facebook decided that they would add
the software producer equivalent of a signing statement to it”, OSI’s
president, Simon Phipps, recently explained to Linux Journal. He
continued:

They
decided they would unilaterally add a patent grant with a defensive
clause in it. They found they were able to do that for a while simply
because the community accepted it. Over time it became apparent to people
that it was actually not an acceptable patent grant, that it unduly
favored Facebook and that if it was allowed to grow to scale, it would
definitely create an environment where Facebook was unfairly
advantaged.

He added that the Facebook incident was actually beneficial for OSI and
ended up being a validation of the open-source approval process:

I think the consequence of that encounter is that more people are now
convinced that the whole licensing arrangement that open-source software
is under needs to be approved at OSI.

I think prior to that,
people felt it was okay for there just to be a license and then for there
to be arbitrary additional terms applied. I think that the consensus of
the community has moved on from that. I think it would be brave for a
future software producer to decide that they can add arbitrary terms
unless those arbitrary terms are minimally changing the rights and
benefits of the community.

As for the notion that OSI should be folded into a larger organization
such as the Linux Foundation?

“When I first joined OSI, which was back in 2009 I think, I shared that
view”, Phipps said. He continued:

I felt that OSI had done its job and could be put
into an existing organization. I came to believe that wasn’t the case,
because the core role that OSI plays is actually a specialist role. It’s
one that needs to be defined and protected. Each of the organizations I
could think of where OSI could be hosted would almost certainly not be
able to give the role the time and attention it was due. There was a risk
there would be a capture of that role by an actor who could not be
trusted to conduct it responsibly.

That risk of the license approval role being captured is what persuaded
me that I needed to join the OSI board and that I needed to help it to
revamp and become a member organization, so that it could protect the
license approval role in perpetuity. That’s why over the last five to six
years, OSI has dramatically changed.

This is Phipps’ second go at being president at OSI. He originally served
in the position from 2012 until 2015, when he stepped down in preparation
for the end of his term on the organization’s board. He returned to the
position last year after his replacement, Allison Randal, suddenly
stepped down to focus on her pursuit of a PhD.

His return was pretty much universally seen in a positive light. During
his first three-year stint, the organization moved toward a
membership-based governance structure and started an affiliate membership
program for nonprofit charitable organizations, industry associations
and academic institutions. This eventually led to an individual
membership program and the inclusion of corporate sponsors.

Although OSI is one of the best known open-source organizations, its
grassroots approach has helped keep it on the lean side, especially when
compared to organizations like the behemoth Linux or Mozilla
Foundations. Phipps, for example, collects no salary for performing his
presidential duties. Compare that with the Linux Foundation’s executive
director, Jim Zemlin, whose salary in 2010 was reportedly north of
$300,000.

“We’re a very small organization actually”, Phipps said. He added:

We have a board
of directors of 11 people and we have one paid employee. That means the
amount of work we’re likely to do behind the scenes has historically been
quite small, but as time is going forward, we’re gradually expanding our
reach. We’re doing that through working groups and we’re doing that
through bringing together affiliates for particular projects.

While the public perception might be that OSI’s role is merely the
approval of open-source licenses, Phipps sees a larger picture. According
to him, the point of all the work OSI does, including the approval
process, is to pave the way to make the road smoother for open-source
developers:

The role that OSI plays is to crystallize consensus. Rather
than being an adjudicator that makes decisions ex cathedra, we’re an
organization that provides a venue for people to discuss licensing. We
then identify consensus as it arises and then memorialize that consensus.
We’re more speaker-of-the-house than king.

That provides an extremely sound way for people to reduce the burden on
developers of having to evaluate licensing. As open source becomes more
and more the core of the way businesses develop software, it’s more and
more valuable to have that crystallization of consensus process taking
out the uncertainty for people who are needing to work between different
entities. Without that, you need to constantly be seeking legal advice,
you need to constantly be having discussions about whether a license
meets the criteria for being open source or not, and the higher
uncertainty results in fewer contributions and less collaboration.

One of OSI’s duties, and one it has in common with organizations such as
the Free Software Foundation (FSF), is that of enforcer of compliance
issues with open-source licenses. Like the FSF, OSI prefers to take a
carrot rather than stick approach. And because it’s the organization that
approves open-source licenses, it’s in a unique position to nip issues in
the bud. Those issues can run the gamut from unnecessary licenses to
freeware masquerading as open source. According to Phipps:

We don’t do that in private. We do that fairly publicly and
we don’t normally need to do that. Normally members of the license
review mailing list, who are all simply members of the community, will go
back to people and say “we don’t think that’s distinctive”, “we don’t
think that’s unique enough”, “why didn’t you use license so and so”, or
they’ll say, “we really don’t think your intent behind this license is
actually open source.” Typically OSI doesn’t have to go and say those
things to people.

The places where we do get involved in speaking to people directly is
where they describe things as open source when they haven’t bothered to
go through that process and that’s the point at which we’ll communicate
with people privately.

The problem of freeware—proprietary software that’s offered without
cost—being marketed under the open-source banner is particularly
troublesome. In those cases, OSI definitely will reach out and contact
the offending companies, as Phipps says,
“We do that quite often, and we have a good track record of helping
people understand why it’s to their business disadvantage to behave in
that way.”

One of the reasons why OSI is able to get commercial software developers
to heed its advice might be because the organization has never taken an
anti-business stance. Founding member Michael Tiemann, now VP of open-source affairs at Red Hat, once said that one of the reasons the
initiative chose the term “open source” was to “dump the moralizing and
confrontational attitude that had been associated with ‘free
software’ in the past and sell the idea strictly on the same
pragmatic, business-case grounds that had motivated Netscape.”

These days, the organization has ties with many major software vendors and
receives most of its financial support from corporate sponsors. However,
it has taken steps to ensure that corporate sponsors don’t dictate OSI
policy. According to Phipps:

If you want to join a trade association, that’s what the Linux
Foundation is there for. You can go pay your membership
fees and buy a vote there, but OSI is a 501(c)(3). That means it’s a
charity that’s serving the public’s interest and the public benefit.

It would be wrong for us to allow OSI to be captured by corporate
interests. When we conceived the sponsorship scheme, we made sure that
there was no risk that would happen. Our corporate sponsors do not get
any governance role in the organization. They don’t get a vote over
what’s happening, and we’ve been very slow to accept new corporate
sponsors because we wanted to make sure that no one sponsor could have an
undue influence if they decided that they no longer liked us or decided
to stop paying the sponsorship fees.

This pragmatic approach, which also puts “permissive” licenses like
Apache and MIT on equal footing with “copyleft” licenses like the GPL,
has not always met with universal approval from FOSS
advocates. The FSF’s Richard Stallman has been critical of the
organization, although noting that his organization and OSI are
essentially on the same page. Years ago, OSI co-founder and creator of
The Open Source Definition, Bruce Perens, decried the “schism” between
the Free Software and Open Source communities—a schism that Phipps
seeks to narrow:

As I’ve been giving keynotes about the first 20 years and the next ten
years of open source, I’ve wanted to make very clear to people that open
source is a progression of the pre-existing idea of free software, that
there is no conflict between the idea of free software and the way it can
be adopted for commercial or for more structured use under the term open
source.

One of the things that I’m very happy about over the last five to six
years is the good relations we’ve been able to have with the Free
Software Foundation Europe. We’ve been able to collaborate with them over
amicus briefs in important lawsuits. We are collaborating with them over
significant issues, including privacy and including software patents, and
I hope in the future that we’ll be able to continue cooperating and
collaborating. I think that’s an important thing to point out, that I
want the pre-existing world of free software to have its due credit.

Software patents represent one of several areas into which OSI has been
expanding. Patents have long been a thorny issue for open source, because
they have the potential to affect not only people who develop software,
but also companies that merely run open-source software on their machines. They
can also be like a snake in the grass: any software application may
unknowingly infringe a patent its developers have never heard of. According to Phipps:

We have a new project that is just getting started, revisiting the role
of patents and standards. We have helped bring together a
post-graduate curriculum on open source for educating graduates on how to
develop open-source software and how to understand it.

We also host other organizations that need a fiduciary host so that they
don’t have to do their own bookkeeping and legal filings. For a couple
years, we hosted the Open Hatch Project, which has now wound up, and we
host other activities. For example, we host the mailing lists for the
California Association of Voting Officials, who are trying to promote
open-source software in voting machines in North America.

Like everyone else in tech these days, OSI is also grappling with
diversity issues. Phipps said the organization is seeking to deal with
that issue by starting at the membership level:

At the moment I feel that I would very much like to see a more diverse
membership. I’d like to see us more diverse
geographically. I’d like to see us more diverse in terms of the
ethnicity and gender of the people who are involved. I would like to
see us more diverse in terms of the businesses from which people are
employed.

I’d like to see all those improve and so, over the next few years
(assuming that I remain president because I have to be re-elected every
year by the board) that will also be one of the focuses that I have.

And to wrap things up, here’s how he plans to go about that:

This year is the anniversary year, and we’ve been able to arrange for OSI
to be present at a conference pretty much every month, in some cases two
or three per month, and the vast majority of those events are global. For
example, FOSSASIA is coming up,
and we’re backing that. We are sponsoring a hostel where we’ll be having
50 software developers who are able to attend FOSSASIA because of the
sponsorship. Our goal here is to raise our profile and to recruit
membership by going and engaging with local communities globally. I think
that’s going to be a very important way that we do it.

Read More

Red Hat Enterprise Linux 7.5 Released, Valve Improves Steam Privacy Settings, New Distribution Specification Project for Containers and More

News briefs for April 11, 2018.

Red Hat Enterprise Linux 7.5 was released yesterday. New
features include “enhanced security and compliance, usability at scale, continued
integration with Windows infrastructure on-premise and in Microsoft Azure, and new
functionality for storage cost controls. The release also includes continued
investment in platform manageability for Linux beginners, experts, and Microsoft
Windows administrators.” See the release
notes
for more information.

The Open Container Initiative (OCI) yesterday announced the launch of the
Distribution
Specification Project
: “having a solid, common distribution specification with
conformance testing will ensure long lasting security and interoperability throughout
the container ecosystem”. See also “Open
Container Initiative nails down container image distribution standard”
on ZDNet
for more details.

Valve is offering new
and improved privacy settings for Steam users
, providing more detailed descriptions of the
settings so you can better manage what your friends and the wider Steam community see.
The announcement notes, “Additionally, regardless of which setting you choose for your
profile’s game details, you now have the option to keep your total game playtime
private. You no longer need to nervously laugh it off as a bug when your friends
notice the 4,000+ hours you’ve put into Ricochet.”

Thousands of websites have been hacked to give “fake update notifications to
install banking malware and remote access trojans on visitors’ computers”, according
to computer researcher Malwarebytes.
Ars
Technica
reports that “The attackers also fly under the radar by using highly obfuscated
JavaScript. Among the malicious software installed in the campaign was the Chthonic
banking malware and a commercial remote access trojan known as NetSupport.”

Krita 4.0.1 was released
yesterday. This new version fixes more than 50 bugs since the 4.0 release and includes
many improvements to the UI.

Read More