Parisians is a cute simulation of Paris with autonomous pedestrians. It's simple to build and easy to personalize: a Single Board Computer "wall piece", on display and looping forever.
Citizens walk the stylised streets of Paris, following the roads, entering buildings to rest, and exploring independently. Create your own civilians and follow them across the half-a-billion-pixel map, all powered by a simple Raspberry Pi and a touch screen.
I wanted autonomous pedestrians whose walk looks "intentional"; the illusion that they had somewhere to be. Hard-coding a finite set of predefined routes, or using a Hamiltonian circuit, would eventually reveal a pattern to the audience, making the pedestrians look robotic and breaking the illusion that they had a choice.
I started working on Parisians in Feb 2022. It was an engagement gift for my brother and I figured it would only take me a few weeks. When August came around I quickly shipped out what I could in time for the wedding. The original simulation was based in Sydney. It had his friends and some of his favourite fictional characters roaming around his neighbourhood. Now it’s a more generic, but very beautiful map of Paris.
The main challenge along the way was memory management. To keep this project affordable, it has to run in less than 2 GB of RAM. That 2 GB has to hold the position and velocity of every pedestrian, every map asset, and the buffer that draws the image on the screen. With this many agents making independent decisions, every calculation has to be efficient.
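To see why 2 GB is tight, here is a back-of-the-envelope budget for the map alone (the 3-bytes-per-pixel RGB figure is my assumption, not from the project):

```python
# Memory needed just to hold the full map uncompressed,
# assuming RGB at 1 byte per channel.
width = height = 23_000
pixels = width * height     # 529,000,000 pixels: the "half a billion"
map_bytes = pixels * 3      # roughly 1.59 GB before anything else loads
print(pixels, map_bytes)
```

That leaves very little headroom for the pedestrians, the display buffer, and the OS itself.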
Hardware Limitations: Coding for an SBC.
Our first complication is the choice of language. To generate the map, we want to use the excellent prettymaps library by marceloprates, which is written in Python. That makes Python, paired with the Pygame library, the obvious choice. Unfortunately, as fast programming languages go, Python and Pygame are a few colours short of a rainbow.
This would usually be fine, since modern computers are rocket-fast and language choice is rarely the bottleneck. However, a Raspberry Pi 4, impressive as it is for its low price, is still just an SBC and is not going to cope with this much complexity if we aren't careful.
The first thing I tried was using all four cores on the Raspberry Pi. As you may know, the Raspberry Pi 4 has a quad-core processor, so maybe we could spread the required calculations across all four cores. I threw each autonomous pedestrian into its own worker and broke out the multiprocessing library... it didn't work. Let me introduce you to the villain of our project…
One of the problems software developers regularly face is race conditions. To oversimplify, a race condition is when two "things" try to modify the same data at the same time. In our case, the two things are four things, and the four things are the Raspberry Pi's four cores. If two or more cores try to modify the position of a pedestrian at the same time, they will probably compute two different positions and we won't know which to keep. Python has an extremely brute-force solution to this: the Global Interpreter Lock. The GIL essentially says "No sharing!". In Python, only one thread can execute Python code at any moment.
Python's built-in multiprocessing library gets around the GIL by using pickle. An entire "job", with its own data and workers, is serialised (pickled) into a binary blob, then run in a completely self-contained process with its own Python interpreter. Very cool! But I'm sure you can imagine how angry an SBC gets when you try to spin up four completely separate instances of Pygame and four 23000 by 23000 pixel buffers. Needless to say, the frame rate was dismal.
Python can be fast!
Now that I knew I was going to be stuck on a single core, I needed the software to be performant. To do this we can make extensive use of the famous Python library NumPy.
The NumPy library provides a data structure that can handle large matrices. It also comes with fast mathematical functions to manipulate matrices. For our purposes we want NumPy to store the pixel data of our large images.
NumPy computations are fast because NumPy is actually C code under the hood, and The C Language is a “very speedy boi”™. When we use NumPy, we are actually leaving the Python Ecosystem and computing closer to the metal.
Another data structure in Python that is fast under the hood is the tuple. Tuples are implemented in C inside CPython, and lookup times are extremely quick.
By using NumPy for all of our matrix transformations (movement) and tuples to store our vectors (positions) we receive big speed boosts.
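As a rough sketch of the idea (the array names and counts here are mine, not the project's): keep every pedestrian's position and velocity in NumPy arrays and move them all with one vectorised operation, instead of looping over pedestrians in Python.

```python
import numpy as np

# Positions and velocities for 1,000 pedestrians, one row each.
positions = np.zeros((1000, 2), dtype=np.float32)
velocities = np.ones((1000, 2), dtype=np.float32)

# One vectorised step moves every pedestrian at once; the loop
# happens inside NumPy's C code, not the Python interpreter.
dt = 1 / 24  # one frame at 24fps
positions += velocities * dt

# Read a single pedestrian's position back out as a cheap tuple.
x, y = positions[0]
spot = (float(x), float(y))
```

The single `positions += velocities * dt` line replaces a thousand-iteration Python loop, which is where the speed boost comes from.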
The Map is TOO BIG!
23000 pixels by 23000 pixels is a lot of data to hold in RAM. The map of Paris struggles to open even on my fast desktop computer. Try to open a file of that size on the Raspberry Pi? Surprise! It crashes the Pi. To resolve this, we are going to need to do three things:
- Be clever about how we read the data into memory.
- Cut the maps into bits and only read what we need to display.
- Avoid rendering the map when we don’t need to.
Pickling the Map.
The Pythonic way of reading and rendering images is the Pillow library, an excellent image-processing library. Unfortunately, loading the image into memory with Pillow immediately bricked the SBC. I also tried Pygame's sprite system, but that turned out to use Pillow under the hood and put me right back where I started. I needed a solution closer to the metal.
The magic that resurrected this project from its dark age was Pickle, which converts data structures into vanilla binary files. For technical reasons, which I won't go into, if we take our map of Paris and dump its pixel data into a vanilla binary array using Pickle, a Raspberry Pi 4 will be able to unpickle it into RAM and then convert it into a NumPy array. A NumPy array is exactly what we want. (I've written Python scripts that do this for you.)
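A minimal sketch of the round trip, not the project's actual scripts (I use a tiny stand-in array, and storing the shape alongside the bytes is my choice):

```python
import io
import pickle
import numpy as np

# A small stand-in for the map's pixel data; the real map is
# 23000 x 23000, far too big to demo here.
pixels = np.random.randint(0, 256, size=(64, 64, 3), dtype=np.uint8)

# Desktop side: dump the raw bytes (plus the shape) with pickle.
buffer = io.BytesIO()
pickle.dump((pixels.shape, pixels.tobytes()), buffer)

# Pi side: unpickle the plain bytes and rebuild the NumPy array.
buffer.seek(0)
shape, raw = pickle.load(buffer)
restored = np.frombuffer(raw, dtype=np.uint8).reshape(shape)

assert np.array_equal(restored, pixels)
```

Because the pickle holds plain bytes rather than a full image object, unpickling is cheap, and `np.frombuffer` rebuilds the array without an extra copy.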
Tiling the Map.
With Pickle, we are able to store the data in memory, but rendering all 529 million pixels on screen at 24fps is not going to be possible. To solve this, we only store and render the section of the map that the user is looking at. By splitting the map into indexed tiles we can check where the in-game camera is pointing, find what part of Paris we are looking at, and only draw those tiles of Paris to the screen.
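The tile lookup boils down to a little integer arithmetic. Something like the following (the tile size and function name are mine for illustration):

```python
TILE = 1024  # hypothetical tile size in pixels

def visible_tiles(cam_x, cam_y, view_w, view_h, tile=TILE):
    """Return (col, row) indices of the map tiles the camera overlaps."""
    first_col = cam_x // tile
    last_col = (cam_x + view_w - 1) // tile
    first_row = cam_y // tile
    last_row = (cam_y + view_h - 1) // tile
    return [(col, row)
            for row in range(first_row, last_row + 1)
            for col in range(first_col, last_col + 1)]
```

On a 1024x600 screen the camera only ever overlaps a handful of tiles, so only a handful of tiles ever need to live in memory or be drawn.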
Render While We Wait!
Remember when I said that NumPy is really C code with some fancy Pythonic makeup? The funny thing about C code is that it is not married to the Python interpreter, so it can release Python's GIL. This means that while our C code is running, the Python interpreter, and the core it was using, are otherwise just waiting.
We want to render the map to the screen while we are running our calculations in NumPy. How can we do that?
The simple Pythonic solution to that is just to spin up a new thread every time we want to render a tile. A thread is just a list of instructions for the CPU to follow.
The thread is still interpreted by Python, so it still needs the GIL. However, the GIL becomes available whenever we are running non-Python calculations, like moving characters in NumPy or updating positions in tuples. Those precious microseconds stack up fast and represent the difference between 16fps and 24fps.
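The pattern looks roughly like this (the render function here is a stand-in, not the project's real Pygame renderer): start the render thread, then do the NumPy work on the main thread; when NumPy drops into C and releases the GIL, the render thread gets its chance to run.

```python
import threading
import numpy as np

positions = np.zeros((1000, 2), dtype=np.float32)
velocities = np.ones((1000, 2), dtype=np.float32)
frames_drawn = []

def render_visible_tiles():
    # Stand-in for blitting the visible map tiles with Pygame.
    frames_drawn.append(True)

# Kick off rendering on its own thread...
renderer = threading.Thread(target=render_visible_tiles)
renderer.start()

# ...while the main thread updates every pedestrian. Large NumPy
# operations can release the GIL inside their C loops, letting the
# render thread make progress at the same time.
positions += velocities * (1 / 24)

renderer.join()
```

Whether the two genuinely overlap depends on NumPy releasing the GIL for that operation, but either way the render work no longer blocks the simulation step in the main loop's critical path.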
Just Buy a Bigger Computer?
Of course, there are other optimisations I haven't discussed, and optimisations I'd love to make. You could also just buy a more powerful machine. But powerful often means bigger, and the Raspberry Pi is affordable and common. Lots of touch screens are made to be compatible with the Raspberry Pi's GPIO pins, which makes the other aspects of the build easier.
The first thing we need is a working Raspberry Pi. For simplicity, I will assume that you are rocking a standard copy of Raspberry Pi OS (Desktop).
The first thing we want to do is connect our monitor. For this project, I used a Waveshare HDMI monitor with resistive touch.
- Line up and connect the Raspberry Pi's GPIO pins to the monitor's header. The Raspberry Pi exposes 40 GPIO pins while the screen exposes 26, so pay attention to which screen pins line up with which Pi pins.
- Connect the HDMI connector to the HDMI port of the screen and the Pi.
Next, we need to get the monitor talking to the Pi. To do this we will need to configure the Pi's boot sequence.
- Find the USB or MicroSD card that contains your copy of Raspberry PI OS.
- Insert it into your computer.
- Navigate to the drive that contains the OS.
- Open the config.txt file in the root directory of the drive.
- Add the following to the end of the config.txt
```
hdmi_group=2
hdmi_mode=87
hdmi_cvt=1024 600 60 6 0 0 0
dtoverlay=ads7846,cs=1,penirq=25,penirq_pull=2,speed=50000,keep_vref_on=0,swapxy=0,pmax=255,xohms=150,xmin=200,xmax=3900,ymin=200,ymax=3900
```
Insert that media back into your Pi with the Waveshare monitor connected and you should see the monitor come alive with a Linux desktop and basic touch controls. You may need to calibrate the touch controls; to calibrate I found this tutorial helpful.
Make it Pretty.
If you would like a wood look there are some simple things you can do.
A quick and dirty solution is some moulded picture frame rails.
The Waveshare monitor comes with a set of screws that suit the four holes at its corners.
You can use these screws to fasten the rails to the monitor for a simple and elegant look.
I’ve attached some drawings and plans if you're into that, but you may also just be able to wing it with a pencil, ruler, saw, and drill.
If you're just looking to get the project running, I've provided a preconfigured copy of the software via Google Drive.
If you intend on customizing the code for your project you will need the software provided on GitHub. The repo does not come with all dependencies you will need, but scripts are provided to create them and there are detailed instructions in the README.
```bash
git clone https://github.com/pixmusix/Parisians.git
```
Instructions on how to customise the simulation (like changing location, pedestrians, colours, speed, etc), can be found in the README.
Once you have a copy of the software on your PC transfer it to your Raspberry Pi via USB (or similar).
Configuring Python Libraries on your Raspberry Pi.
We will need to install some Python libraries.
Thankfully, Raspberry Pi OS comes with Python3 and Pip3 pre-installed.
To install the libraries we need you can follow the GitHub repository README.
Alternatively, you can run the script below.
```bash
pip install pygame==2.1.2
pip install pillow==8.3.1
pip install numpy==1.23.5
pip install matplotlib==3.6.2
pip install prettymaps==0.1.3
pip install opencv-python==220.127.116.11
pip install osmnx==1.2.2
```
Launching Simulation on Reboot.
We want this simulation to automatically launch when we power on the PI. There are lots of ways to do this, but I had the most success placing the following code in my .bashrc file.
```bash
sleep 5
cd /absolute/path/to/parisians
python3 main.py
```
Then I ensured the terminal booted on launch.
```bash
cd /home/pi/.config/lxsession/LXDE-pi/
echo "@lxterminal" >> autostart
reboot
```
Having the terminal open appeared to be "necessary" for the hotkeys to trigger. I also found that this method ensured Wi-Fi and other auxiliaries were operational before the loop started running.