Use a Raspberry Pi 3 as an access point

 


 

Raspberry Pis are awesome [citation needed].

This post is about how to set up a WiFi access point with a Raspberry Pi 3. It describes which packages you have to install and shows an example of how to configure them. In the end you will have a Raspberry Pi 3 that is connected to the internet through Ethernet, provides an SSID, and forwards the traffic between WiFi and Ethernet.

This tutorial basically follows the instructions on http://elinux.org/RPI-Wireless-Hotspot, except that it uses dnsmasq instead of udhcpd.

Steps

Operating system

Download and install an operating system for the Raspberry Pi. I used “Raspbian” and followed this description.

https://www.raspberrypi.org/documentation/installation/installing-images/mac.md

Before you unmount the flashed card, create a file named ssh in the boot partition of the disk. Otherwise you won’t be able to SSH into the Raspberry Pi.
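For example, on macOS the freshly flashed card’s boot partition typically shows up at /Volumes/boot (the exact mount point is an assumption, adjust it for your system). The sketch below falls back to a scratch directory so it can be dry-run without a card inserted:

```shell
# Mount point of the flashed card's boot partition; /Volumes/boot is the
# typical macOS location (an assumption -- adjust for your OS).
BOOT="${BOOT:-/Volumes/boot}"
# Fall back to a scratch directory so the sketch can be dry-run without a card.
[ -d "$BOOT" ] || BOOT="$(mktemp -d)"

# An empty file named "ssh" is all it takes to enable the SSH server on boot.
touch "$BOOT/ssh"
ls -l "$BOOT/ssh"
```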

Installations

Connect the Pi to your local network (through Ethernet), search for the little rascal (e.g. using nmap) and connect to it via SSH.

When logged in, you will have to install at least two packages: dnsmasq and hostapd. I always love to have vim, so here’s what I did:

sudo apt-get update
sudo apt-get install vim
sudo apt-get install dnsmasq
sudo apt-get install hostapd

Configure the wlan interface

Now, let’s edit the iface wlan0 part in /etc/network/interfaces. Make sure it is static and has the following properties:

allow-hotplug wlan0
iface wlan0 inet static
address 10.0.0.1
netmask 255.255.255.0
wpa-conf /etc/wpa_supplicant/wpa_supplicant.conf

Note that I used the address 10.0.0.1 as the static IP. We will have to use the same IP for the DHCP configuration.

At this point you should quickly restart the networking service.

sudo service networking restart

ifconfig wlan0 should then show the applied changes on the wlan0 interface.

Configure dnsmasq

The Pi will have to manage the clients’ IP addresses (DHCP) on the wlan0 interface. I used dnsmasq as the DHCP server, but it should work fine with any other DHCP server.

Now, let’s edit /etc/dnsmasq.conf:

domain-needed
bogus-priv
interface=wlan0
listen-address=10.0.0.1
dhcp-range=10.0.0.2,10.0.0.254,12h
dhcp-option=option:router,10.0.0.1
dhcp-authoritative

Note that the Pi’s static IP address is used for listen-address and dhcp-option=option:router. For more information about that, consider reading http://www.thekelleys.org.uk/dnsmasq/doc.html. ;-)

Port forwarding (route wlan0 to eth0)

The next step affects iptables. I am no expert in this, so I basically just copy-pasted that stuff and ensured that the in (-i) and out (-o) parameters made sense.

sudo iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
sudo iptables -A FORWARD -i eth0 -o wlan0 -m state --state RELATED,ESTABLISHED -j ACCEPT
sudo iptables -A FORWARD -i wlan0 -o eth0 -j ACCEPT

In a nutshell, these rules allow traffic to flow between the wireless interface (wlan0) and the Ethernet interface (eth0), with NAT applied towards eth0.
For the forwarding to work immediately, enable IP forwarding in the kernel:

sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"

To make IP forwarding survive a reboot, edit /etc/sysctl.conf and uncomment this line:

net.ipv4.ip_forward=1

Finally, persist the iptables rules, otherwise they are lost after a reboot. I used the package iptables-persistent, which conveniently saves the current rules right during installation.

sudo apt-get install iptables-persistent

Configure the access point

Now it gets interesting. We can create our own SSID and define a password.
Create /etc/hostapd/hostapd.conf, paste the following and save:

interface=wlan0
driver=nl80211
ssid=SIMPLIFICATOR-WIFI
hw_mode=g
channel=6
macaddr_acl=0
auth_algs=1
ignore_broadcast_ssid=0
wpa=2
wpa_passphrase=YOUR-INCREDIBLY-SECURE-PASSWORD
wpa_key_mgmt=WPA-PSK
# Better not use the weak TKIP encryption (only needed by old client devices):
#wpa_pairwise=TKIP
rsn_pairwise=CCMP
# 802.11n support
ieee80211n=1
# QoS support
wmm_enabled=1
ht_capab=[HT40][SHORT-GI-20][DSSS_CCK-40]

Let’s point hostapd at the above config: edit /etc/default/hostapd and make sure DAEMON_CONF is uncommented and points to the config file.

DAEMON_CONF="/etc/hostapd/hostapd.conf"

Services (hostapd & dnsmasq)

Lastly, let’s restart the services and enable them, so that they start automatically on boot.

sudo service hostapd restart
sudo service dnsmasq restart
sudo update-rc.d hostapd enable
sudo update-rc.d dnsmasq enable

That’s it

You should now see a WiFi named SIMPLIFICATOR-WIFI and be able to connect to it using the passphrase YOUR-INCREDIBLY-SECURE-PASSWORD, or whatever values you have chosen.

Insights

While writing the blog post I had several insights:

  • The Raspberry Pi 3 comes with a 2.4 GHz 802.11n (150 Mbit/s) WiFi chip. It’s always good to know the limits of the bandwidth.
  • Even if you used a WiFi USB adapter rated at 1000 Mbit/s, the maximum speed would be 480 Mbit/s because of the USB 2 interface (!)
  • I wasn’t able to configure the Pi to run two WiFi dongles simultaneously, which would have allowed extending the range of an existing WiFi without connecting the Pi to an Ethernet cable.

Vaults with Ansible

When it comes to software versioning, you normally do not want to upload passwords or secrets into shared repositories. Too many people might have access to the code, and it’s irresponsible to have secrets there without protection.

On the other hand, you actually do want to share such secrets among certain co-workers (the “circle of trust”, implying that all other co-workers are not trustworthy 😉).

So, what we want are “protected” secrets in our version control system, that only the circle of trust has access to.

We are going to identify the files to protect and encrypt them with Ansible. The encryption is based on a password that we share with the people who may know our secrets. So, this password is chosen once and used for the same file “forever”.

Encrypt 🔐

Let’s say we store our secrets in a file named secrets.yml, and the content looks like this:

favorite_artists:
- Lady Gaga
- Justin Bieber
- Kanye West

Obviously no one should ever know that we like those artists, but the circle of trust may know, if necessary.

Now we can use ansible-vault encrypt to encrypt our secrets.

pi@raspberrypi:~ $ cat ./secrets.yml
favorite_artists:
- Lady Gaga
- Justin Bieber
- Kanye West

pi@raspberrypi:~ $ ansible-vault encrypt ./secrets.yml
Vault password:   # enter a vault password here
Encryption successful

pi@raspberrypi:~ $ cat ./secrets.yml
$ANSIBLE_VAULT;1.1;AES256
38373634613533646632343139633431313465386136613231316163633965623832313832623830
6537656536393339626161616632633062656161346630360a653833373033643565313632386338
34623537393861623236666132356231656165393033633035333338306436376563383234383030
3330346664326339300a313565313933333464643436353130363539666534323634346439636433
33396636353461653436613764373861396133623833386436303536636363333737653136656165
31643164303564373861343239643038656161346562343236323761663335363465633833363436
61373966343633663531653932326239346438626330653265343739646561346431323966313132
64626134356535366562

Note where it asks you to enter a vault password (# enter a vault password here). We’ve chosen a wise, complicated password (= foo), and can now share it with the people in the circle of trust.

Further, we can check in secrets.yml and upload it to our version control system.
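Checking the encrypted file in is then ordinary version control work; only the ciphertext ever reaches the repository. A minimal sketch with git (the repository name, truncated ciphertext and commit details are purely illustrative):

```shell
# Throwaway repository for the demonstration.
git init vault-demo
cd vault-demo

# Stand-in for the output of "ansible-vault encrypt" (truncated ciphertext).
printf '$ANSIBLE_VAULT;1.1;AES256\n3837363461...\n' > secrets.yml

# The encrypted file is committed like any other file.
git add secrets.yml
git -c user.name=demo -c user.email=demo@example.com commit -m 'Add encrypted secrets'

# Only the vault header and ciphertext are stored:
head -1 secrets.yml
```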

Decrypt

Of course, at some point we will have to decrypt secrets.yml. We do this:

pi@raspberrypi:~ $ ansible-vault decrypt ./secrets.yml
Vault password:    # enter the vault password here
Decryption successful

pi@raspberrypi:~ $ cat ./secrets.yml
favorite_artists:
- Lady Gaga
- Justin Bieber
- Kanye West

That’s the whole magic.

One more thing

Don’t be confused that you get different encrypted contents even without changing the original content (and with the same vault password).

Encrypt the file with foo twice and save the corresponding outputs to ./secrets1.yml and ./secrets2.yml:

pi@raspberrypi:~ $ cat ./secrets.yml
favorite_artists:
- Lady Gaga
- Justin Bieber
- Kanye West

pi@raspberrypi:~ $ ansible-vault encrypt ./secrets.yml --output=./secrets1.yml
Vault password:  # "foo" goes here
Encryption successful

pi@raspberrypi:~ $ ansible-vault encrypt ./secrets.yml --output=./secrets2.yml
Vault password:  # "foo" goes here too
Encryption successful

Compare the files: secrets1.yml and secrets2.yml

pi@raspberrypi:~ $ cat ./secrets1.yml
$ANSIBLE_VAULT;1.1;AES256
39356232653735336132323762643366336530666334333039373265336334373635336665643965
3230336463613962363730393530316566313432613761650a636666623132323462323466613164
62316434663763613637666133626536633639616362313236383964363331616436353331363631
3336343339363733390a343034616365323163346231303065393065313039373837393264363361
35343961623165383037626231333061316263626431623361323164333235393835363262363438
61626433323032323261376261303536313534663861623638383235343566353532393736396464
65326337346562633330366134633731643930323364333730316533383432643266373464633863
30356437636633363465

pi@raspberrypi:~ $ cat ./secrets2.yml
$ANSIBLE_VAULT;1.1;AES256
65323662356530333862393965386137666539636262656332323535363934343033363633353831
3738666430363738386465306134316333383734633762350a616433656465343866613766643237
33636537303962366131363965326637333633333161616562346334663134343666666266646264
6166366564313431370a353630363635643865346138613634633833653863376561336638386138
32616536646165313034303938343863316630373731353730326330306231653532306363366634
31376437643539646464636635306365653962666262623637303335613230383133326363383432
65626162303735303863373031396537363837626461613363336537323362653163663735303931
37633961326136663162

Encrypted, they are not identical, but they can still both be decrypted with foo, with the same result.
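The varying output comes from ansible-vault salting every encryption run with fresh random data. The same effect can be sketched with openssl (purely illustrative, this is not how ansible-vault works internally):

```shell
# Encrypt the same plaintext twice with the same password ("foo").
echo 'favorite_artists: [Lady Gaga]' > plain.yml
openssl enc -aes-256-cbc -pbkdf2 -pass pass:foo -in plain.yml -out c1.bin
openssl enc -aes-256-cbc -pbkdf2 -pass pass:foo -in plain.yml -out c2.bin

# A fresh random salt makes the two ciphertexts differ ...
cmp -s c1.bin c2.bin || echo 'ciphertexts differ'

# ... yet both decrypt back to the very same plaintext.
openssl enc -d -aes-256-cbc -pbkdf2 -pass pass:foo -in c1.bin
```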

pi@raspberrypi:~ $  ansible-vault decrypt ./secrets1.yml
Vault password:
Decryption successful

pi@raspberrypi:~ $  ansible-vault decrypt ./secrets2.yml
Vault password:
Decryption successful

pi@raspberrypi:~ $  cat ./secrets1.yml
favorite_artists:
- Lady Gaga
- Justin Bieber
- Kanye West

pi@raspberrypi:~ $  cat ./secrets2.yml
favorite_artists:
- Lady Gaga
- Justin Bieber
- Kanye West

“Sitincator” – Simplificator’s Meeting Room Display


We have two meeting rooms at our Simplificator headquarters in central Zurich. As they have opaque doors and no windows towards the aisle, it was often unclear whether a meeting room was occupied or not. Frequently, people opened the door and immediately apologized when realizing that there was an ongoing meeting. As an agile company we strive to reduce such nuisances and to improve our efficiency.

We, the “Smooth Operators” team, came up with an idea to improve the situation by mounting a display next to the door of each meeting room showing its occupancy. A 3-day retreat was planned to focus our efforts on this project.


We decided to use a Raspberry Pi 3 with its official touch screen display. This allowed us not only to display information, but also to make the system interactive. We started out by brainstorming the functionality we wanted to provide to the user. Most importantly, it should be obvious whether the meeting room was occupied or not. Scheduled meetings of the current day should be visible and we wanted to provide the ability to make a “quick reservation”, i.e. anonymously book the room for 15 or 30 minutes. This feature is quite useful if you want to have a short ad-hoc talk or a quick phone call. As we already schedule meetings in Simplificator’s Google Calendar, we fetch booking data from the Google Calendar API.

After defining the functionality, we created wireframes to clarify how many screens we would have to implement and what information and interactivity they should provide. We ended up having two screens: the main screen showing whether the room is free or busy and a screen showing all scheduled meetings of the current day. Once the functionality and the screens were defined, our designer started to lay out the screens and define their components graphically. We tested the design on the display of the Raspberry Pi regarding size and colors and performed quick user tests to fine-tune the behavior.

Each screen has several possible states (e.g. free and busy), so we decided to use an interactive web frontend technology. As retreats at Simplificator offer an educational component as well, we decided to create two versions of the app, one in React and one in Elm. To run the app in a kiosk mode on the Raspberry Pi, we chose to package our app with Electron.

After the three days of retreat we had two basic apps, one in React and one in Elm. For future maintainability we decided to go on with the React app. We mounted the Raspberry Pis and their displays next to the meeting room doors, installed our app on them and tested for a while. We found some bugs to fix and improvements to implement. The app is now running quite smoothly and our meetings are free of disturbances!

If you want to rebuild this setup at your office as well, you find the required hardware components and a link to the app’s code below. Drop us a line and tell us how it is working out for you!

 

Components:

Source code of the Sitincator app: https://github.com/simplificator/sitincator

Getting Started with Hanami and GraphQL

What is GraphQL?

GraphQL is a query language for APIs and a runtime for fulfilling those queries with your existing data. GraphQL provides a complete and understandable description of the data in your API, it gives clients the power to ask for exactly what they need and nothing more, makes it easier to evolve APIs over time, and enables powerful developer tools.

What is Hanami?

Hanami is a Ruby MVC web framework comprised of many micro-libraries. It has a simple, stable API, a minimal DSL, and prioritises the use of plain objects over magical, over-complicated classes with too much responsibility.

The natural repercussion of using simple objects with clear responsibilities is more boilerplate code. Hanami provides ways to mitigate this extra legwork while keeping the underlying implementation plain and visible.

Project setup

If you haven’t already done so, install hanami.

gem install hanami

After hanami is installed on your machine, you can create a new project. Feel free to choose another database or test framework if you like.

hanami new blogs --database=postgres --application-name=api --test=rspec
cd blogs

Define entities

Before we do anything at all, we need entities we can query over our API. Hanami offers a generator for entities which can be invoked by the following command:

hanami generate model author

This will generate an entity and the corresponding test. In this tutorial tests are omitted for brevity but you are encouraged to implement them on your own.

We start out with our author as it’s a very simple model. It has a single attribute ‘name’.

class Author
  include Hanami::Entity

  attributes :name
end

Next, we’re going to generate another model: a blog.

hanami generate model blog

For our blog, we want a title, content and an author_id to reference the author.

class Blog
  include Hanami::Entity

  attributes :title, :content, :author_id
end

Update database

To be able to store entities, we need to define database tables to hold data. Create a migration for the author model first:

hanami generate migration create_authors

Hanami will generate a migration file with a current timestamp for you under db/migrations. Open the file and add the following:

Hanami::Model.migration do
  change do
    create_table :authors do
      primary_key :id
      column      :name, String, null: false
    end
  end
end

For blogs, we create another migration named create_blogs.

hanami generate migration create_blogs

Inside the migration create another table with columns for our blog:

Hanami::Model.migration do
  change do
    create_table :blogs do
      primary_key :id
      column      :title,    String, null: false
      column      :content,  String, null: false
      foreign_key :author_id, :authors
    end
  end
end

To get the changes to our database, execute

hanami db create
hanami db migrate

In order to be able to run database-backed tests, we need to ensure that the test database uses the same schema as our development database. Update the schema by setting HANAMI_ENV to ‘test’ explicitly:

HANAMI_ENV=test hanami db create
HANAMI_ENV=test hanami db migrate

Now that our database is ready, we can go ahead and define mappings for author and blog. Go to lib/blogs.rb, find the mapping section and add mappings for the new entities.

##
# Database mapping
#
# Intended for specifying application wide mappings.
#
mapping do
  collection :blogs do
    entity      Blog
    repository  BlogRepository

    attribute :id,          Integer
    attribute :title,       String
    attribute :content,     String
    attribute :author_id,   Integer
  end

  collection :authors do
    entity     Author
    repository AuthorRepository

    attribute :id,    Integer
    attribute :name,  String
  end
end

Introducing Types

After having defined our entities, we can now move on to create GraphQL types. First update your Gemfile and add the following line:

gem 'graphql'

and then run

bundle install

We’re going to place type definitions in a dedicated directory to keep them separate from our entities. Furthermore those types are relevant for our web API only and not for the whole application. Create a directory in apps/api/ named ‘types’

 mkdir -p apps/api/types

and update your API application’s application.rb (apps/api/application.rb) to include the type definitions in the load path.

load_paths << [
  'controllers',
  'types'
]

Now we’re ready to create our types inside apps/api/types:

# apps/api/types/author_type.rb
AuthorType = GraphQL::ObjectType.define do
  name 'Author'
  description 'Author of Blogs'
  field :id, types.ID
  field :name, types.String
  field :blogs, types[!BlogType]
end

# apps/api/types/blog_type.rb
BlogType = GraphQL::ObjectType.define do
  name 'Blog'
  description 'A Blog'
  field :id, types.ID
  field :title, types.String
  field :content, types.String
  field :author do
    type AuthorType
    resolve -> (obj, _, _) { AuthorRepository.find(obj.author_id) }
  end
end

# apps/api/types/query_type.rb
require_relative 'author_type'
require_relative 'blog_type'

QueryType = GraphQL::ObjectType.define do
  name 'Query'
  description 'The query root for this schema'

  field :blog do
    type BlogType
    argument :id, !types.ID
    resolve -> (_, args, _) { BlogRepository.find(args[:id]) }
  end

  field :author do
    type AuthorType
    argument :id, !types.ID
    resolve -> (_, args, _) { AuthorRepository.find(args[:id]) }
  end
end
require_relative 'query_type'

BlogSchema = GraphQL::Schema.define(query: QueryType)

Notice the require_relative statements at the beginning of some files. This is a workaround: even though the directory is in the load path, types don’t seem to be auto-loaded inside a type definition file.

… and Action

Now that the schema definitions and load path are set up correctly, it is time to create the action that will serve query requests. To generate a new action invoke the following command:

hanami generate action api graphql#show --skip-view

Since we’re providing the --skip-view flag, hanami will not generate a view class and template for this action.
The above command generates a new action where we place the query logic.

module Api::Controllers::Graphql
  class Show
    include Api::Action

    def call(params)
      query_variables = params[:variables] || {}
      self.body = JSON.generate(BlogSchema.execute(params[:query], variables: query_variables))
    end
  end
end

To let hanami know that it shouldn’t render a view, we set self.body directly inside the action.

Query the API

In order to see the API working, we need data! Fire up your hanami console and create some authors and blogs.

hanami c

Now create one or more authors and save them to the database via AuthorRepository:

author = Author.new(name: 'John Wayne')
AuthorRepository.persist author

Do the same for blogs:

blog = Blog.new(title: 'first blog', content: 'lorem ipsum dolor sit amet', author_id: 1)
BlogRepository.persist blog

As soon as we have our data in place, we can use cURL to query our API.

curl -XGET -d 'query={ blog(id: 1) {  title author { name } }}' http://localhost:2300/graphql

If all goes well you should see a response looking something like this:

{"data":{"blog":{"title":"first blog","author":{"name":"John Wayne"}}}}

Go ahead and play around with the query. If you look at the type definition for QueryType you’ll notice that it should be possible to query for authors, too. Can you get the API to list all blog titles for a given author?

That’s it. This introduction should give you a glimpse into Hanami and GraphQL. You can find more information in the section below.

Links and references

More information about GraphQL
The Hanami homepage
GitHub repository for GraphQL-Ruby
Marc-André Giroux’s blog post this article is based upon
Source Code from this article.

Learning to Code at Simplificator

When I finished my Master’s in Economics in September 2014, I didn’t want to take on some random office job where I would do the same thing every day. I wanted a job where I have to learn something every day and where I have to keep up to date with what I do. I then decided to take on a 50% accounting job, in order to make a living, and meanwhile, I decided to learn programming. I started with online tutorials and practised on my own. However, I soon realised that, at some point, I wasn’t getting any further. I had a basic knowledge of data structures and control structures, but I had no idea where I would have to use them in a real project.

In early 2015, I then looked at different companies and it soon became clear to me that I wanted to work at Simplificator. When I called to ask for an internship, I was told that Simplificator doesn’t have any internship positions. I then thought about applying at a different company, but I really just wanted to work at Simplificator. So I sent an email to Lukas (the CEO), asking again. They then invited me for a short interview and they agreed that I could start a 50% internship the next week.

During my internship, Tobias was my team lead and instructor. He taught me about classes, methods, design patterns and much more. Especially at the beginning, I had to learn a lot of different technologies: Ruby, Rails, SQL, HTML, CSS etc. Soon, I started my own little project, which was a calorie tracker. The calorie tracker was a very good way to learn new things as the project developed. I started with the backend, so that the business logic was implemented as discussed with the “client”, who was Tobias. The frontend didn’t look nice at all. I only used it to test whether my backend worked as intended. I then received a design from our designer Marcel, which I had to implement. This was very important, because I knew that this is how it works in real projects, too.

This is what the calorie tracker looked like after I implemented the design:

screen-shot-2016-09-30-at-14-14-50

Later, I wrote unit tests and integration tests, as well as controller tests for the calorie tracker. As a next step, users were introduced, so several people would be able to use the calorie tracker. This was quite tricky for me, because I had never worked with sessions before. But again, I knew this would be important in real projects, too. Next, there should be a date picker, where the user could jump to the requested day.

screen-shot-2016-09-30-at-14-37-20

Another requirement was that the user should be able to add a new entry without the page needing to reload every time. This was probably the hardest part, as I had to learn jQuery and the concept of AJAX at the same time. However, it worked out and the user experience was much better than before.

I really liked the calorie tracker project, because I learned so many things that would be useful in later real-life projects. Also, it was nice to see the calorie tracker develop along with my programming skills. I implemented the easiest features in the beginning, and they became much fancier as I learned new concepts and technologies. I also got a small insight into how a real project works: I had to deal with the customer not yet being sure about what he really wants, and thus with changing requirements. It was a great way to develop my programming skills.

I want to thank Tobias for his great guidance towards smooth operating, clean code and coding methodology. I learned so much in this year that I will be able to use for my whole programming career. But it was not only Tobias who was helpful to me during my internship. Actually, everybody at Simplificator was always happy to help me with questions and to give me guidance in everything they could. I am still so happy to have had the opportunity for this internship, even though such a position didn’t actually exist. This is exactly how I perceived Simplificator from the beginning: people are always open to new ideas, even from outsiders, as I was at that time.

Since September, I have been working at Simplificator as a full-time Junior Software Developer, and I am working on much more challenging projects now. It is always interesting and I am still learning every day. Just like I always wanted :-)

 

EuRuKo Sofia 2016: My first Ruby Conference

I was really excited to go to EuRuKo 2016, because it was the first time for me to attend an event like this. The conference was on Friday and Saturday, and we arrived on Thursday morning to discover Sofia. As it turned out, the city is quite small, so a few hours were enough to do so. We then ran into some other Swiss guys who were attending the conference (Pascal among them), and when they were talking about past conferences, I really couldn’t wait for the next day.

The EuRuKo started with Matz’s keynote. He talked about the Ruby community and about how Ruby is designed to make programmers happy. He wants to keep the core features of Ruby, while at the same time keeping up with the development of the technologies and the needs of the programmers. “I don’t want the Ruby community to be dead. I want it to keep moving forward!”, he said. He also spent some time talking about Ruby 3 and its incredible new features (like partial type inference), and then finished off by saying that we will not get it for some years :-).


Another talk that I really liked was “Little Snippets” by Xavier Noria. He showed real code examples that are often seen in practice, and their much simpler and more readable counterparts. This was especially great for me as a junior developer, because I didn’t know about some of these easier ways to write code; when you see them, they make total sense. For example, he mentioned that the order of code snippets really matters. If you write code in the same order that your brain logically conceives it, another person can read it in one flow and will understand it right away. Here is an example:

attr_reader :deleted_at
attr_reader :created_at
attr_reader :updated_at

This order doesn’t really make sense, if you think about the natural flow of a project. Normally, you first create an instance of a class. Later you might update it, and finally, you might delete it. Therefore, this code snippet should really look like this:

attr_reader :created_at
attr_reader :updated_at
attr_reader :deleted_at

You might say this is a detail. But we should write code the way our brains naturally conceive it. This also makes it easier for another person who has to work on or maintain the code we wrote.

The official party of the conference was on Friday, and it was absolutely great. I met so many new people from different countries and everybody was so nice. I then understood what many people have told me before: the Ruby community is an exceptional one. Later, there was a vote about where the EuRuKo should be hosted in 2017. It was a close call between Rome and Budapest. In the end, Budapest got the most votes. So I will of course be there in 2017.

On Saturday, there was a talk, “The Consequences of an Insightful Algorithm” by Carina C. Zona, that really touched me. She talked about the consequences an algorithm can have on a person’s life. One example was a story about a large online shop that sent promotional mail about pregnancy products to a young woman. Her father was shocked and called the online shop, complaining that they sent mail like this to his daughter. Some days later, he called again, apologising to them, because his daughter was indeed pregnant. Carina discussed the difficulty of deciding how far data collection should go and to what extent it is morally defensible to use it to make profit. Her talk can be viewed at this link: Carina C. Zona: Consequences of an Insightful Algorithm | JSConf EU 2015


There was another talk on Saturday by André Arko that made me think a lot. He was talking about Ruby Together and how their last year went. At Ruby Together, they maintain the Ruby infrastructure, such as Bundler and RubyGems. André said that they lack volunteers to help with the maintenance, as well as funds to pay professional developers to do so. He told the following story: the RubyGems site was down and a lot of developers contacted him, saying that they would be willing to help getting it up again. Once everything was back to normal, he contacted these people, asking them if they would be willing to help with the general maintenance. Zero of these people agreed to do so. This was really surprising to me. We all need these technologies for our work, and, I guess, in our free time too. People tend to take these things for granted. However, the infrastructure doesn’t maintain itself. I knew that Simplificator already supports Ruby Together, and I then decided to do so too, as a private person. It costs $40 each month and I think that is not too much, considering that I use these technologies every day. Please consider making a contribution too!

The two conference days were over pretty quickly, and I am very happy to have met so many new and interesting people. I am already looking forward to the RubyDay in Florence at the end of November. And of course, I will also be at EuRuKo 2017 in Budapest next year.