Category Archives: Linux Journal Blogs


Caption This!


Carlie Fairchild
Fri, 04/20/2018 – 11:21

Each month, we provide a cartoon in need of a caption. You submit your caption, we choose three finalists, and readers vote for their favorite. The winning caption for this month’s cartoon will appear in the June issue of Linux Journal.

To enter, simply type your caption in the comments below or email it to us at publisher@linuxjournal.com.


Mozilla's Common Voice Project, Red Hat Announces Vault Operator, VirtualBox 5.2.10 Released and More

News briefs for April 20, 2018.

Participate in Mozilla’s open-source Common Voice Project, an
initiative to help teach machines how real people speak: “Now you can donate
your voice to help us build an open-source voice database that anyone can
use to make innovative apps for devices and the web.” For more about the
Common Voice Project, see the story on
opensource.com.

Red Hat yesterday announced
the Vault Operator, a new open-source project that “aims to make it easier
to install, manage, and maintain instances of Vault—a tool designed
for storing, managing, and controlling access to secrets, such as tokens,
passwords, certificates, and API keys—on Kubernetes clusters.”

Google might be working on implementing dual-boot functionality in Chrome OS
to allow Chromebook users to boot multiple OSes. Softpedia News reports on a
Reddit thread that references “Alt OS” in recent Chromium Gerrit commits.
This is only speculation so far, and Google has not confirmed it is working
on dual-boot support for Chrome OS on Chromebooks.

Oracle recently released
VirtualBox 5.2.10. This release addresses the Critical Patch Update (CPU)
Advisory for April 2018 related to Oracle VM VirtualBox and includes several
other improvements, such as a fix for a KDE Plasma hang and a fix for using
multiple NVMe controllers with ICH9 enabled. See the Changelog for all the
details.

Apple yesterday announced
it has open-sourced its FoundationDB cloud database. Apple’s goal is “to
build a community around the project and make FoundationDB the foundation
for the next generation of distributed databases”. The project is now
available on
GitHub.


More L337 Translations


Dave Taylor
Thu, 04/19/2018 – 09:20

Dave continues with his shell-script L33t translator.

In my last article, I talked about the inside jargon of hackers and
computer geeks
known as “Leet Speak” or just “Leet”. Of course, that’s a shortened version
of the word Elite, and it’s best written as L33T or perhaps L337 to be
ultimately kewl. But hey, I don’t judge.

Last time I looked at a series of simple letter substitutions that allow
you to
convert a sentence like “I am a master hacker with great
skills” into something like this:


I AM A M@ST3R H@XR WITH GR3@T SKILLZ

It turns out that I missed some nuances of Leet and didn’t realize that
most often the letter “a” is actually turned into a “4”, not
an “@”, although as with just about everything about the jargon,
it’s somewhat random.

In fact, every single letter of the alphabet can be randomly tweaked and
changed, sometimes from a single letter to a sequence of two or three
symbols. For example, another variation on “a” is “/-” (for
what are hopefully visually obvious reasons).

Continuing in that vein, “B” can become “|3”, “C” can become “[”,
“I” can become “1”, and one of my favorites, “M” can
change into “[]V[]”. That’s a lot of work, but since one of the
goals is to have a language no one else understands, I get it.
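
Expressed as shell commands, those multi-character swaps are just more sed
substitutions. Here's a rough sketch (the sample word and the fixed,
non-random mapping are my own illustration, not the article's script):


#!/bin/bash
# Sketch: a few of the multi-character L337 swaps chained in one sed call.
word="BASIC MIC"
word="$(echo "$word" | sed 's/B/|3/g; s/C/[/g; s/I/1/g; s/M/[]V[]/g')"
echo "$word"      # |3AS1[ []V[]1[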

There are additional substitutions: a word can have its trailing “S”
replaced by a “Z”, a trailing “ED” can become
“‘D” or just “D”, and another interesting one is that words
containing “and”, “anned” or “ant” can have that
sequence replaced by an ampersand (&).

Let’s add all these L337 filters and see how the script is shaping up.

But First, Some Randomness

Since many of these transformations are going to have a random element,
let’s go ahead and produce a random number between 1–10 to figure
out whether to do one or another action. That’s easily done with the
$RANDOM variable:


doit=$(( $RANDOM % 10 ))       # random virtual coin flip

Now let’s say that there’s a 50% chance that a -ed suffix is going
to change to “‘D” and a 50% chance that it’s just going to become
“D”, which is coded like this:


if [ $doit -ge 5 ] ;  then
  word="$(echo $word | sed "s/ed$/d/")"
else
  word="$(echo $word | sed "s/ed$/'d/")"
fi

Let’s add the additional transformations, but not do them every time.
Let’s give them a 70–90% chance of occurring, based on the transform
itself. Here are a few examples:


if [ $doit -ge 3 ] ;  then
  word="$(echo $word | sed "s/cks/x/g;s/cke/x/g")"
fi

if [ $doit -ge 4 ] ;  then
  # use \& for a literal ampersand (a bare & would re-insert the match)
  word="$(echo $word | sed "s/and/\&/g;s/anned/\&/g;s/ant/\&/g")"
fi

And so, here’s the second translation, a bit more sophisticated:


$ l33t.sh "banned? whatever. elite hacker, not scriptie."
B&? WH4T3V3R. 3LIT3 H4XR, N0T SCRIPTI3.

Note that it hasn’t realized that “elite” should become L337 or
L33T, but since it is supposed to be rather random, let’s just leave this
script as is. Kk? Kewl.

If you want to expand it, an interesting programming problem is to break
each word down into individual letters, then randomly change lowercase to
uppercase
or vice versa, so you get those great ransom-note-style WeiRD LeTtEr
pHrASes.
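
As a starting point, here's a minimal sketch of that idea (this loop and
its sample output are illustrative, not part of the script above): walk the
string character by character and flip a virtual coin for each letter's case:


#!/bin/bash
# Sketch: randomize the case of each letter, ransom-note style.
phrase="weird letter phrases"
out=""
for (( i=0; i<${#phrase}; i++ )); do
  ch="${phrase:$i:1}"
  if [ $(( RANDOM % 2 )) -eq 0 ]; then
    out="$out$(echo "$ch" | tr '[:lower:]' '[:upper:]')"
  else
    out="$out$(echo "$ch" | tr '[:upper:]' '[:lower:]')"
  fi
done
echo "$out"       # something like: WeIrD LettEr phRaSeS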

Next time, I plan to move on, however, and look at the great command-line
tool youtube-dl, exploring how to use it to download videos and even
just the audio tracks as MP3 files.


Help Canonical Test GNOME Patches, Android Apps Illegally Tracking Kids, MySQL 8.0 Released and More

News briefs for April 19, 2018.

Help Canonical test the GNOME desktop memory leak fixes in
Ubuntu 18.04 LTS (Bionic Beaver) by downloading and installing the
current daily ISO for your hardware from here: http://cdimage.ubuntu.com/daily-live/current/bionic-desktop-amd64.iso.
Then download the patched version of gjs, install, reboot, and then
just use your desktop normally. If performance seems impacted by the new
packages, re-install from the ISO, skip the new packages, and see whether
things improve. See the Ubuntu
Community page
for more detailed instructions.

Thousands of Android apps downloaded from the Google Play store may be tracking kids’ data illegally, according
to a new study.
NBC News reports: “Researchers at the University of California’s International Computer
Science Institute analyzed 5,855 of the most downloaded kids apps,
concluding that most of them ‘are potentially in violation’ of the
Children’s Online Privacy Protection Act 1998, or COPPA, a federal law
making it illegal to collect personally identifiable data on children under
13.”

MySQL 8.0 has been released. This new version “includes significant performance,
security and developer productivity improvements enabling the next
generation of web, mobile, embedded and Cloud applications.” MySQL 8.0
features include MySQL document store, transactional data dictionary, SQL
roles, default to utf8mb4 and more. See the white
paper
for all the details.

KDE announced
this morning that
KDE Applications 18.04.0 are now available. New features include
improvements to panels in the Dolphin file manager; Wayland support for
KDE’s JuK music player; improvements to Gwenview, KDE’s image viewer and
organizer; and more.

Collabora Productivity, “the driving force behind putting LibreOffice in
the cloud”, announced
a new release of its enterprise-ready cloud document suite—Collabora
Online 3.2. The new release includes implemented chart creation, data
validation in Calc, context menu spell-checking and more.


An Update on Linux Journal


Carlie Fairchild
Wed, 04/18/2018 – 12:41

So many of you have asked how to help Linux Journal continue to be published* for years to come.

First, keep the great ideas coming—we all want to continue making Linux Journal 2.0 something special, and we need this community to do it.

Second, subscribe or renew. Magazines have a built-in fundraising program: subscriptions. It’s true that most magazines don’t survive on subscription revenue alone, but having a strong subscriber base tells Linux Journal, prospective authors, and yes, advertisers, that there is a community of people who support and read the magazine each month.

Third, if you prefer reading articles on our website, consider becoming a Patron. We have different Patreon reward levels; one even gets your name immortalized in the pages of Linux Journal.

Fourth, spread the word within your company about corporate sponsorship of Linux Journal. We as a community reject tracking, but we explicitly invite high-value advertising that sponsors the magazine and values readers. This is new and unique in online publishing, and just one example of our pioneering work here at Linux Journal.  

Finally, write for us! We are always looking for new writers, especially now that we are publishing more articles more often.
 

With all our gratitude,

Your friends at Linux Journal

 

*We’d be remiss not to acknowledge and thank Private Internet Access for saving the day and bringing Linux Journal back from the dead. They are incredibly supportive partners, and sincerely, we cannot thank them enough for keeping us going. At a certain point, however, Linux Journal has to become sustainable on its own.


Rise of the Tomb Raider Comes to Linux Tomorrow, IoT Developers Survey, New Zulip Release and More

News briefs for April 18, 2018.

Rise of the Tomb Raider: 20 Year Celebration comes to Linux tomorrow! A minisite
dedicated to Rise of the Tomb Raider
is available now from Feral
Interactive, and you also can view the trailer on Feral’s
YouTube channel.

Zulip, the open-source team chat software, has announced the
release of Zulip Server 1.8. This is a huge release, with more than 3500 new
commits since the last release in October 2017. Zulip “is an alternative to
Slack, HipChat, and IRC. Zulip combines the immediacy of chat with the
asynchronous efficiency of email-style threading, and is 100% free and
open-source software”.

The IoT
Developers Survey 2018
is now available. The survey was sponsored by
the Eclipse IoT Working Group, Agile IoT, IEEE and the Open Mobile Alliance
“to better understand how developers are building IoT solutions”. The survey
covers what people are building, key IoT concerns, top IoT programming languages
and distros, and more.

Google released Chrome 66 to its stable channel for desktop/mobile users.
This release includes many security improvements as well as new JavaScript
APIs. See the Chrome Platform
Status
site for details.

openSUSE Leap 15 is scheduled
for release
May 25, 2018. Leap 15 “shares a common core with SUSE Linux Enterprise (SLE) 15 sources and
has thousands of community packages on top to meet the needs of professional
and semi-professional users and their workloads.”

GIMP 2.10.0 RC 2 has been released.
This release fixes 44 bugs and introduces important performance
improvements. See the complete list of changes here.


Create Dynamic Wallpaper with a Bash Script


Patrick Wheelan
Wed, 04/18/2018 – 09:58


Harness the power of bash and learn how to scrape websites for exciting
new images every morning.

So, you want a cool dynamic desktop wallpaper without dodgy programs
and a million viruses? The good news is, this is Linux, and anything is
possible. I started this project because I was bored of my standard OS
desktop wallpaper, and
I have slowly created a plethora of scripts to pull images from several
sites and set them as my desktop background. It’s a nice little addition
to my day—being greeted by a different cat picture or a panorama of
a country I didn’t know existed. The great news is that it’s easy to
do, so
let’s get started.

Why Bash?

Bash (the Bourne Again SHell) is standard across almost all *NIX systems
and provides a wide range of operations “out of the box”, which would
take time and copious lines of code to achieve in a conventional coding
or even scripting language. Additionally, there’s no need to re-invent
the wheel. It’s much easier to use somebody else’s program to download
webpages, for example, than to deal with low-level system sockets
in C.

How’s It Going to Work?

The concept is simple. Choose a site with images you like and
“scrape” the page for those images. Then once you have a direct link,
you download them and set them as the desktop wallpaper using the display
manager. Easy, right?

A Simple Example: xkcd

To start off, let’s venture to every programmer’s second-favorite page
after Stack Overflow: xkcd.
Loading the page, you should be greeted by the
daily comic strip and some other data.

Now, what if you want to see
this comic without venturing to the xkcd site? You need a script to do
it for you. First, you need to know how
the webpage looks to the computer, so download it and take a
look. To do this, use wget, an easy-to-use, commonly
installed, non-interactive, network downloader.
So, on the command
line, call wget, and give it the link to the page:


user@LJ $: wget https://www.xkcd.com/


--2018-01-27 21:01:39--  https://www.xkcd.com/
Resolving www.xkcd.com... 151.101.0.67, 151.101.192.67,
 ↪151.101.64.67, ...
Connecting to www.xkcd.com|151.101.0.67|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 2606 (2.5K) [text/html]
Saving to: 'index.html'

index.html                                  100%
[==========================================================>]
2.54K  --.-KB/s    in 0s

2018-01-27 21:01:39 (23.1 MB/s) - 'index.html' saved [6237]

As you can see in the output, the page has been saved to index.html
in your current directory. Using your favourite editor, open it
and take a look (I’m using nano for this example):


user@LJ $: nano index.html

Now you might realize, despite this being a rather bare page, there’s a
lot of code in that file. Instead of going through it all, let’s use
grep, which is perfect for this task. Its sole function is
to print lines matching your search. Grep uses the syntax:


user@LJ $: grep [search] [file]

Looking at the daily comic, its current title is “Night Sky”. Searching
for “night” with grep yields the following results:


user@LJ $: grep "night" index.html

Night Sky
Image URL (for hotlinking/embedding):
 ↪https://imgs.xkcd.com/comics/night_sky.png

The grep search has returned two image links in the file, each related to
“night”. Looking at those two lines, one is the image in the page, and
the other is for hotlinking and is already a usable link. You’ll
be obtaining the first link, however, as it is more representative
of other pages that don’t provide an easy link, and it serves as a good
introduction to the use of grep and cut.

To get the first link out of the page, you first need to identify it in
the file programmatically. Let’s try grep again, but this time instead
of using a string you already know (“night”), let’s approach as if you
know nothing about the page. Although the link will be different, the
HTML should remain the same; therefore, <img src= always should appear
before the link you want:


user@LJ $: grep "img src=" index.html

xkcd.com logo
Selected Comics

It looks like there are three images on the page.
Comparing these results with those from the first grep, you’ll see that
<img src="//imgs.xkcd.com/comics/night_sky.png" has been returned again. This
is the image you want, but how do you separate it from the other two? The
easiest way is to pass it through another grep. The other two links
contain “/s/”, whereas the link you want contains “/comics/”. So,
you need to grep the output of the last command for “/comics/”. To pass
along the output of the last command, use the pipe character (|):


user@LJ $: grep "img src=" index.html | grep "/comics/"


And, there’s the line! Now you just need to separate the image link from
the rest of it with the cut command. cut
uses the syntax:


user@LJ $: cut [-d delimiter] [-f field] [-c characters]

To cut the link from the rest of the line, you’ll want to cut next to the
quotation mark and select the field before the next quotation mark. In
other words, you want the text between the quotes, or the link, which is
done like this:


user@LJ $: grep "img src=" index.html | grep "/comics/" |
 ↪cut -d" -f2

//imgs.xkcd.com/comics/night_sky.png

And, you’ve got the link. But wait! What about those pesky forward slashes at the
beginning? You can cut those out too:


user@LJ $: grep "img src=" index.html | grep "/comics/" |
 ↪cut -d" -f 2 | cut -c 3-

imgs.xkcd.com/comics/night_sky.png

Now you’ve cut the first two characters (the leading slashes) from the
line, and you’re left with a link straight to the image. Using wget again,
you can download
the image:


user@LJ $: wget imgs.xkcd.com/comics/night_sky.png


--2018-01-27 21:42:33--  http://imgs.xkcd.com/comics/night_sky.png
Resolving imgs.xkcd.com... 151.101.16.67, 2a04:4e42:4::67
Connecting to imgs.xkcd.com|151.101.16.67|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 54636 (53K) [image/png]
Saving to: 'night_sky.png'

night_sky.png                               100%
[===========================================================>]
53.36K  --.-KB/s    in 0.04s

2018-01-27 21:42:33 (1.24 MB/s) - 'night_sky.png'
 ↪saved [54636/54636]

Now you have the image in your directory, but its name will change when the
comic’s name changes. To fix that, tell wget to save it with
a specific name:


user@LJ $: wget "$(grep "img src=" index.html | grep "/comics/"
 ↪| cut -d'"' -f2 | cut -c 3-)" -O wallpaper
--2018-01-27 21:45:08--  http://imgs.xkcd.com/comics/night_sky.png
Resolving imgs.xkcd.com... 151.101.16.67, 2a04:4e42:4::67
Connecting to imgs.xkcd.com|151.101.16.67|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 54636 (53K) [image/png]
Saving to: 'wallpaper'

wallpaper                                   100%
[==========================================================>]
53.36K  --.-KB/s    in 0.04s

2018-01-27 21:45:08 (1.41 MB/s) - 'wallpaper' saved [54636/54636]

The -O option tells wget to save the downloaded image as
“wallpaper”. Now that you know the name of the image, you can set it as
a wallpaper. This varies depending upon which display manager you’re
using. The most popular are listed below, assuming the image is located
at /home/user/wallpaper.

GNOME:


gsettings set org.gnome.desktop.background picture-uri
 ↪"File:///home/user/wallpaper"
gsettings set org.gnome.desktop.background picture-options
 ↪scaled

Cinnamon:


gsettings set org.cinnamon.desktop.background picture-uri
 ↪"file:///home/user/wallpaper"
gsettings set org.cinnamon.desktop.background picture-options
 ↪scaled

Xfce:


xfconf-query --channel xfce4-desktop --property
 ↪/backdrop/screen0/monitor0/image-path --set
 ↪/home/user/wallpaper

You can set your wallpaper now, but you need different images to mix in.
Looking at the webpage, there’s a “random” button that takes you
to a random comic. Searching with grep for “random” returns the following:


user@LJ $: grep random index.html

    Random
    Random

    This is the link to a random comic, and downloading it with wget and
    reading the result, it looks like the initial comic page. Success!

    Now that you’ve got all the components, let’s put them together into a
    script, replacing www.xkcd.com with the new c.xkcd.com/random/comic/:

    
    #!/bin/bash
    
    # Download a random comic page (saved as index.html)
    wget c.xkcd.com/random/comic/ -O index.html
    
    wget "$(grep "img src=" index.html | grep /comics/ |
     ↪cut -d'"' -f2 | cut -c 3-)" -O wallpaper
    
    gsettings set org.gnome.desktop.background picture-uri
     ↪"file:///home/user/wallpaper"
    gsettings set org.gnome.desktop.background picture-options
     ↪scaled
    
    

    All of this should be familiar except the first line, which designates
    this as a bash script, and the second wget command. To capture the output
    of commands into a variable, you use $(). In this case,
    you’re capturing
    the grepping and cutting process—capturing the final link and then
    downloading it with wget. When the script is run, the commands inside
    the parentheses all run first, producing the image link, before wget is
    called to download it.

    There you have it—a simple example of a dynamic wallpaper that you can run
    anytime you want.

    If you want the script to run automatically, you can add a cron job
    to have cron run it for you. So, edit your crontab with:

    
    user@LJ $: crontab -e
    
    

    My script is called “xkcd”, and my crontab entry looks like this:

    
    @reboot /bin/bash /home/user/xkcd
    
    

    This will run the script (located at /home/user/xkcd) using bash at
    every restart.
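
    If you'd rather refresh the wallpaper on a schedule instead of only at
    boot, any standard cron time specification works as well; for example,
    an entry like the following (the daily 9am time is just an illustration)
    would run the script every morning:


    0 9 * * * /bin/bash /home/user/xkcd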

    Reddit

    The script above shows how to search for images in HTML code and download
    them. But, you can apply this to any website of your choice—although the
    HTML code will be different, the underlying concepts remain the same. With
    that in mind, let’s tackle downloading images from Reddit.
    Why Reddit?
    Reddit is possibly the largest blog on the internet and the
    third-most-popular site in the US. It aggregates content from many
    different communities together onto one site. It does this through use
    of “subreddits”, communities that join together to form
    Reddit. For the purposes of this article, let’s focus on subreddits (or
    “subs”
    for short) that primarily deal with images. However, any subreddit, as
    long as it allows images, can be used in this script.


    Figure 1. Scraping the Web Made Simple—Analysing Web Pages in
    a Terminal

    Diving In

    Just like the xkcd script, you need to download the web page from a
    subreddit to analyse it. I’m using reddit.com/r/wallpapers for this
    example. First, check for images in the HTML:

    
    user@LJ $: wget https://www.reddit.com/r/wallpapers/ && grep
     ↪"img src=" index.html
    
    --2018-01-28 20:13:39--  https://www.reddit.com/r/wallpapers/
    Resolving www.reddit.com... 151.101.17.140
    Connecting to www.reddit.com|151.101.17.140|:443... connected.
    HTTP request sent, awaiting response... 200 OK
    Length: 27324 (27K) [text/html]
    Saving to: 'index.html'
    
    index.html                                  100%
    [==========================================================>]
    26.68K  --.-KB/s    in 0.1s
    
    2018-01-28 20:13:40 (270 KB/s) - 'index.html' saved [169355]
    
    
    a community
    ↪for 9 years

    ↪….Forever and ever……

    — SNIP —

    All the images have been returned in one long line, because the
    HTML for the images is also in one long line. You need to split
    this one long line into the separate image links. Enter Regex.

    Regex is short for regular expression, a system used by many programs to
    allow users to match an expression to a string. It contains wild cards,
    which are special characters that match certain patterns. For example,
    the dot (.) will match any single character. For this example,
    you want an expression that matches every link in the HTML file. All HTML
    links have one string in common. They all take the form href=”LINK”. Let’s
    write a regex expression to match:

    
    href="([^"#]+)"
    
    

    Now let’s break it down:

    • href=” — simply states that the first characters should match these.

    • () — forms a capture group.

    • [^] — forms a negated set. The string shouldn’t match
      any of the characters inside.

    • + — the string should match one or more of
      the preceding tokens.

    Altogether, the regex matches a string that begins with href=", doesn't
    contain any quotation marks or hashtags and finishes with a quotation mark.

    This regex can be used with grep like this:

    
    user@LJ $: grep -o -E 'href="([^"#]+)"' index.html
    
    href="/static/opensearch.xml"
    href="https://www.reddit.com/r/wallpapers/"
    href="//out.reddit.com"
    href="//out.reddit.com"
    href="//www.redditstatic.com/desktop2x/img/favicon/
    ↪apple-icon-57x57.png"
    
    --- SNIP ---
    
    

    The -E option enables extended regex syntax, and the -o switch means
    grep will print only the text matching the pattern and not the
    whole line. You
    now have a much more manageable list of links. From there, you can use the
    same techniques from the first script to extract the links and filter
    for images. This looks like the following:

    
    user@LJ $: grep -o -E 'href="([^"#]+)"' index.html | cut -d'"'
     ↪-f2 | sort | uniq | grep -E '.jpg|.png'
    
    
    View post on imgur.com
    View post on imgur.com
    View post on imgur.com
    https://i.redd.it/s8ngtz6xtnc01.jpg
    //www.redditstatic.com/desktop2x/img/favicon/
    ↪android-icon-192x192.png
    //www.redditstatic.com/desktop2x/img/favicon/
    ↪apple-icon-114x114.png
    //www.redditstatic.com/desktop2x/img/favicon/
    ↪apple-icon-120x120.png
    //www.redditstatic.com/desktop2x/img/favicon/
    ↪apple-icon-144x144.png
    //www.redditstatic.com/desktop2x/img/favicon/
    ↪apple-icon-152x152.png
    //www.redditstatic.com/desktop2x/img/favicon/
    ↪apple-icon-180x180.png
    //www.redditstatic.com/desktop2x/img/favicon/
    ↪apple-icon-57x57.png
    //www.redditstatic.com/desktop2x/img/favicon/
    ↪apple-icon-60x60.png
    //www.redditstatic.com/desktop2x/img/favicon/
    ↪apple-icon-72x72.png
    //www.redditstatic.com/desktop2x/img/favicon/
    ↪apple-icon-76x76.png
    //www.redditstatic.com/desktop2x/img/favicon/
    ↪favicon-16x16.png
    //www.redditstatic.com/desktop2x/img/favicon/
    ↪favicon-32x32.png
    //www.redditstatic.com/desktop2x/img/favicon/
    ↪favicon-96x96.png

    The final grep uses regex again to match .jpg or .png. The | character
    acts as a boolean OR operator.

    As you can see, there are four matches for actual images: two .jpgs and
    two .pngs. The others are Reddit default images, like the logo. Once
    you remove those images, you’ll have a final list of images to set
    as a wallpaper. The easiest way to remove these images from the list is
    with sed:

    
    user@LJ $: grep -o -E 'href="([^"#]+)"' index.html | cut -d'"'
     ↪-f2 | sort | uniq | grep -E '.jpg|.png' | sed /redditstatic/d
    
    
    View post on imgur.com
    View post on imgur.com
    View post on imgur.com
    https://i.redd.it/s8ngtz6xtnc01.jpg

    sed works by matching what’s between the two
    forward slashes. The d
    on the end tells sed to delete the lines that match the pattern, leaving
    the image links.

    The great thing about sourcing images from Reddit is that every subreddit
    contains nearly identical HTML; therefore, this small script will work on
    any subreddit.

    Creating a Script

    To create a script for Reddit, you'll want to be able to choose which
    subreddits to source images from. I've created a directory for my
    script and placed a file called "links" in that directory. This file
    contains the subreddit links in the following format:

    
    https://www.reddit.com/r/wallpapers
    https://www.reddit.com/r/wallpaper
    https://www.reddit.com/r/NationalPark
    https://www.reddit.com/r/tiltshift
    https://www.reddit.com/r/pic
    
    

    At run time, I have the script read the list and download these subreddits
    before stripping images from them.

    Since you can have only one image at a time as desktop wallpaper, you’ll want to narrow
    down the selection of images to just one. First, however, it’s best to
    have a wide range of images without using a lot of bandwidth. So you’ll
    want to
    download the web pages for multiple subreddits and strip the image
    links but not download the images themselves. Then you’ll use a random selector to
    select one image link and download that one to use as a wallpaper.

    Finally, if you're downloading lots of subreddits' web pages, the script will
    become very slow, because it waits for each command to
    complete before proceeding. To circumvent this, you can fork
    a command by appending an ampersand (&) character. This creates a new
    process for the command, “forking” it from the main process (the script).

    Here’s my fully annotated script:

    
    #!/bin/bash
    
    DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
     ↪# Get the script's current directory
    
    linksFile="links"
    
    mkdir $DIR/downloads
    cd $DIR/downloads
    
    # Strip the image links from the html
    function parse {
    grep -o -E 'href="([^"#]+)"' $1 | cut -d'"' -f2 | sort | uniq
     ↪| grep -E '.jpg|.png' >> temp
    grep -o -E 'href="([^"#]+)"' $2 | cut -d'"' -f2 | sort | uniq
     ↪| grep -E '.jpg|.png' >> temp
    grep -o -E 'href="([^"#]+)"' $3 | cut -d'"' -f2 | sort | uniq
     ↪| grep -E '.jpg|.png' >> temp
    grep -o -E 'href="([^"#]+)"' $4 | cut -d'"' -f2 | sort | uniq
     ↪| grep -E '.jpg|.png' >> temp
    }
    
    # Download the subreddit's webpages
    function download {
    rname=$( echo $1 | cut -d / -f 5  )
    tname=$(echo t.$rname)
    rrname=$(echo r.$rname)
    cname=$(echo c.$rname)
    wget --load-cookies=../cookies.txt -O $rname $1
     ↪&>/dev/null &
    wget --load-cookies=../cookies.txt -O $tname $1/top
     ↪&>/dev/null &
    wget --load-cookies=../cookies.txt -O $rrname $1/rising
     ↪&>/dev/null &
    wget --load-cookies=../cookies.txt -O $cname $1/controversial
     ↪&>/dev/null &
    wait # wait for all forked wget processes to return
    parse $rname $tname $rrname $cname
    }
    
    
    # For each line in links file
    while read l; do
       if [[ $l != *"#"* ]]; then # if line doesn't contain a
     ↪hashtag (comment)
            download $l &
       fi
    done < $DIR/$linksFile
    
    wait # wait for all the forked downloads to finish
    
    # Drop Reddit's own static images, then pick one image link at random
    wallpaper=$(sed /redditstatic/d temp | shuf -n 1)
    echo $wallpaper >> $DIR/log # save image link into log in case
     ↪we want it later
    
    wget -b $wallpaper -O $DIR/wallpaperpic 1>/dev/null # Download
     ↪wallpaper image
    
    gsettings set org.gnome.desktop.background picture-uri
     ↪file://$DIR/wallpaperpic # Set wallpaper (Gnome only!)
    
    
    rm -r $DIR/downloads # cleanup
    
    

    Just like before, you can set up a cron job to run the script for you at
    every reboot or whatever interval you like.

    And, there you have it—a fully functional cat-image harvester. May your morning
    logins be greeted with many furry faces. Now go forth and discover new
    subreddits to gawk at and new websites to scrape for cool wallpapers.


    Cooking With Linux (without a net): A CMS Smorgasbord


    by Marcel Gagné

    Note: You are watching a recording of a live show. It’s Tuesday, and that means it’s time for Cooking With Linux (without a net), sponsored and supported by Linux Journal. Today, I’m going to install four popular content management systems. These will be Drupal, Joomla, WordPress, and Backdrop. If you’re trying to decide on what your next CMS platform should be, this would be a great time to tune in. And yes, I’ll do it all live, without a net, and with a high probability of falling flat on my face. Join me today at 12 noon, Eastern Time. Be part of the conversation.

    Content management systems covered include Drupal, Joomla, WordPress, and Backdrop.



    The Agony and the Ecstasy of Cloud Billing

    Corey Quinn
    Tue, 04/17/2018 – 09:40

    Cloud billing is inherently complex; it’s not just you.

    Back in the mists of antiquity when I started reading Linux Journal,
    figuring out what an infrastructure was going to cost was (although still
    obnoxious in some ways) straightforward. You’d sign leases with colocation
    providers, buy hardware that you’d depreciate on a schedule and
    strike a deal in blood with a bandwidth provider, and you were
    more or less set until something significant happened to your scale.

    In today’s brave new cloud world, all of that goes out the window. The
    public cloud providers give with one hand (“Have a full copy of any
    environment you want, paid by the hour!”), while taking with the other (“A
    single Linux instance will cost you $X per hour, $Y per GB transferred
    per month, and $Z for the attached storage; we simplify this pricing
    into what we like to call ‘We Make It Up As We Go Along'”).

    In my day job, I’m a consultant who focuses purely on analyzing and
    reducing the Amazon Web Services (AWS) bill. As a result, I’ve seen a
    lot of environments doing different things: cloud-native shops spinning
    things up without governance, large enterprises transitioning into
    the public cloud with legacy applications that don’t exactly support
    that model without some serious tweaking, and cloud migration projects
    that somehow lost their way severely enough that they were declared
    acceptable as they were, and the “multi-cloud” label was slapped on to
    them. Throughout all of this, some themes definitely have emerged that
    I find people don't intuitively grasp at first. To wit:

    • It’s relatively straightforward to do the basic arithmetic to figure
      out what a current data center would cost to put into the cloud as
      is—generally it’s a lot! If you do a 1:1 mapping of your existing data center
      into the cloudy equivalents, it invariably will cost more; that’s a
      given. The real cost savings arise when you start to take advantage
      of cloud capabilities—your web server farm doesn’t need to have 50
      instances at all times. If that's your burst load, maybe you can scale
      in to five instances or so when traffic is low? Only once you fall into
      a pattern (and your applications support it!) of paying only for what
      you need when you need it do the cost savings of cloud become apparent.

    • One of the most misunderstood aspects of Cloud Economics is the proper
      calculation of Total Cost of Ownership, or TCO. If you want to do a
      break-even analysis on whether it makes sense to build out a storage
      system instead of using S3, you’ve got to include a lot more than just
      a pile of disks. You’ve got to factor in disaster recovery equipment
      and location, software to handle replication of data, staff to run the
      data center/replace drives, the bandwidth to get to the storage from
      where it’s needed, the capacity planning for future growth—and the
      opportunity cost of building that out instead of focusing on product
      features.

    • It’s easy to get lost in the byzantine world of cloud billing
      dimensions and lose sight of the fact that you’ve got staffing
      expenses. I’ve yet to see a company with more than five employees wherein
      the cloud expense wasn’t dwarfed by payroll. Unlike the toy projects
      some of us do as labors of love, engineering time costs a lot of money.
      Retraining existing staff to embrace a cloud future takes time, and not
      everyone takes to this new paradigm quickly.

    • Accounting is going to have to weigh in on this, and if you’re not
      prepared for that conversation, it’s likely to be unpleasant. You’re going
      from an old world where you could plan your computing expenses a few years
      out and be pretty close to accurate. Cloud replaces that with a host of
      variables to account for, including variable costs depending upon load,
      amortization of Reserved Instances, provider price cuts and a complete
      lack of transparency with regard to where the money is actually going
      (Dev or Prod? Which product? Which team spun that up? An engineer left the
      company six months ago, but their 500TB of data is still sitting there and so
      on).

    The worst part is that all of this isn’t apparent to newcomers to cloud
    billing, so when you trip over these edge cases, it’s natural to feel as
    if the problem is somehow your fault. I do this for a living, and I was
    stymied trying to figure out what data transfer was likely to cost in
    AWS. I started drawing out how it’s billed to customers, and ultimately
    came up with the “AWS Data Transfer Costs” diagram shown in Figure 1.


    Figure 1. A convoluted mapping of how AWS data transfer is priced
    out.

    If you can memorize those figures, you’re better at this than I am by a
    landslide! It isn’t straightforward, it’s not simple, and it’s certainly
    not your fault if you don’t somehow intrinsically know these things.

    That said, help is at hand. AWS billing is getting much more
    understandable, with the advent of such things as free Reserved Instance
    recommendations, the release of the Cost Explorer API and the rise of
    serverless technologies. For their part, Google’s GCP and Microsoft’s
    Azure learned from the early billing stumbles of AWS, and as a result,
    both have much more understandable cost structures. Additionally, there
    are a host of cost visibility Platform as a Service offerings out there;
    they all do more or less the same things as one another, but they’re great
    for ad-hoc queries around your bill. If you’d rather build something you
    can control yourself, you can shove your billing information from all providers
    into an SQL database and run something like QuickSight or Tableau on top
    of it to aid visualization, as many shops do today.

    In return for this ridiculous pile of complexity, you get something
    rather special—the ability to spin up resources on-demand, for as
    little time as you need them, and pay only for the things that you
    use. It’s incredible as a learning resource alone—imagine how much
    simpler it would have been in the late 1990s to receive a working Linux
    VM instead of having to struggle with Slackware’s installation for the
    better part of a week. The cloud takes away, but it also gives.
