This Case Study is a guest post written by Titouan Galopin,
lead engineer and product lead of the En Marche! project.
Please note that this is a strictly technical article; any political comment
will be automatically deleted.
Want your company featured on the official Symfony blog? Send a proposal or case
study to fabien.potencier@sensiolabs.com
Project Background
In April 2016, Emmanuel Macron,
now President of France, created a political movement
called "En Marche!" ("On the Move" in English), initially as a door-to-door
operation to ask the public what was wrong with France.
Unlike established political parties, En Marche! didn't have any infrastructure,
budget or members to support its cause. That's why En Marche! relied on the
power of Internet since its very beginning to find supporters, promote events
and collect donations.
I started to work for En Marche! as a volunteer in October 2016. The team was
small and all of the IT operations were handled by just one person, so
they gladly accepted my proposal to help them. At that time, the platform was
created with WordPress, but we needed to replace it with something that allowed
faster and more customized development. The choice of Symfony was natural: it
fits the project size well, I have experience with it and it scales easily to
handle the large number of users we have.
Architecture overview
Scalability was the top priority of the project, especially after the issues
they faced with the first version of the platform that wasn't built with
Symfony. The following diagram shows an overview of the project architecture,
which is extremely scalable and redundant where needed:
![Architecture overview diagram]()
We use Google Container Engine and Kubernetes to
provide scalability, rolling updates and load balancing.
The Symfony app is built from the ground up as a Dockerized application. The
configuration uses environment variables and the application is read-only to
keep it scalable: we don't generate any files at run-time in the container. The
application cache is generated when building the Docker image and then
synchronized among the servers using the Symfony Cache component
combined with a Redis instance.
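As a minimal sketch of that shared cache (the Redis DSN, namespace and cache key below are placeholders, not the project's actual code; the RedisAdapter calls are the Cache component's standard API):

```php
<?php

use Symfony\Component\Cache\Adapter\RedisAdapter;

// The Redis DSN comes from an environment variable, like the rest of the
// container configuration (the variable name here is just an example).
$redis = RedisAdapter::createConnection(getenv('APP_REDIS_DSN') ?: 'redis://localhost');

// A cache pool shared by every container replica: items written by one pod
// are visible to all the others, so the application itself stays read-only.
$cache = new RedisAdapter($redis, 'app', 3600);

$item = $cache->getItem('homepage.articles');
if (!$item->isHit()) {
    // Expensive value computed once and shared (placeholder data here).
    $item->set(['article-1', 'article-2']);
    $cache->save($item);
}
$articles = $item->get();
```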
There are two workers, managed by RabbitMQ, to process some heavy operations in
the background: sending emails (sometimes we have to send 45k emails in a single
request) and building the serialized JSON user lists that are used by several
parts of the application to avoid dealing with slow and complex SQL queries.
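For illustration, enqueueing one of these background jobs with the php-amqplib library could look roughly like this (a sketch, not the project's actual worker code; the queue name and payload are invented):

```php
<?php

use PhpAmqpLib\Connection\AMQPStreamConnection;
use PhpAmqpLib\Message\AMQPMessage;

// Connection settings would normally come from environment variables.
$connection = new AMQPStreamConnection('rabbitmq', 5672, 'guest', 'guest');
$channel = $connection->channel();

// Durable queue consumed by the mails worker (hypothetical queue name).
$channel->queue_declare('mailer', false, true, false, false);

// The web request only enqueues the job; the worker sends the 45k emails.
$payload = json_encode(['campaign' => 123, 'template' => 'event-invitation']);
$channel->basic_publish(
    new AMQPMessage($payload, ['delivery_mode' => AMQPMessage::DELIVERY_MODE_PERSISTENT]),
    '',       // default exchange
    'mailer'  // routing key = queue name
);

$channel->close();
$connection->close();
```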
The database is hosted on Google Cloud SQL, a centralized MySQL database that
we don't have to manage ourselves. To connect to it, we use the
Cloud SQL proxy Docker image.
Deployment
The project uses a Continuous Delivery strategy, which is different from the
Continuous Deployment approach: each commit is automatically deployed on a
staging server but the production deployment is manual. Google Container Engine
and Kubernetes are the key components of our deployment flow.
The Continuous Delivery process, as well as the unit and functional tests, is
handled by CircleCI. We also use StyleCI (to ensure that new code matches the
coding style of the rest of the project) and SensioLabsInsight (to perform
automatic code quality analyses). These three services are configured as checks
that each Pull Request must pass before merging it.
When a Pull Request is merged, the Continuous Delivery process starts (see the configuration file):
- Authenticate on Google Cloud using a Circle CI environment variable.
- Build the JavaScript files for production.
- Build the three Docker images of the project (app, mails worker, users lists worker).
- Push the built images to Google Container Registry.
- Use the kubectl command line tool to update the staging server (a rolling update).
The only process performed manually (on purpose) is the SQL migration. Even
though it could be automated, we prefer to carefully review migrations before
applying them to prevent serious errors in production.
Front-end
The application front-end doesn't follow the single-page application pattern. In
fact, we wanted to use as little JavaScript as possible to improve performance
and rely on native browser features.
React + Webpack
The JavaScript code of the application is implemented using React compiled with
Webpack. We don't use Redux - or even React-Router - but pure React code, and we
load the components only in specific containers on the page, instead of building
the whole page with them. This is useful for two reasons:
- The HTML content is fully rendered before React is loaded, and then React
modifies the page contents as needed. This makes the application usable
without JavaScript, even when the page is still loading on slow networks. This
technique is called "progressive enhancement" and it dramatically improves the
perceived performance.
- We use Webpack 2 with tree shaking and chunk loading, so the components of
each page are only loaded when necessary and therefore do not bloat the
minified application code.
This technique led us to organize the front-end code as follows:
- A front/ directory
at the root of the application stores all the SASS and JavaScript files.
- A tiny kernel.js
file loads the JavaScript vendors and application code in parallel.
- An app.js
file loads the different application components.
- In the Twig templates, we load the components needed for each page (for example,
the address autocomplete component).
Front-end performance
Front-end performance is often overlooked, but the network is usually the
biggest bottleneck of your application. Saving a few milliseconds in the backend
won't get you very far, but saving 3 or more seconds of image loading time will
change the perception of your web site.
Images were the main front-end performance issue. Campaign managers wanted to
publish lots of images, but the users want fast-loading pages. The solution was
to use powerful compression algorithms and apply other tricks.
First, we stored the image contents on Google Cloud Storage and their metadata
in the database (using a Doctrine entity called Media).
This allows us, for example, to know the image dimensions without needing to
load it. This helps us create a page layout that doesn't jump around while
images load.
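A simplified sketch of what such a Media entity can look like (the fields and mapping shown here are illustrative, not the exact production code):

```php
<?php

namespace AppBundle\Entity;

use Doctrine\ORM\Mapping as ORM;

/**
 * Image metadata only; the binary content lives in Google Cloud Storage.
 *
 * @ORM\Entity
 */
class Media
{
    /**
     * @ORM\Id
     * @ORM\GeneratedValue
     * @ORM\Column(type="integer")
     */
    private $id;

    /**
     * Path of the file in the Cloud Storage bucket.
     *
     * @ORM\Column(length=255)
     */
    private $path;

    /**
     * Dimensions stored at upload time, so templates can reserve the right
     * space before the image is downloaded.
     *
     * @ORM\Column(type="integer")
     */
    private $width;

    /**
     * @ORM\Column(type="integer")
     */
    private $height;

    public function getWidth(): int
    {
        return $this->width;
    }

    public function getHeight(): int
    {
        return $this->height;
    }
}
```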
Second, we combined the Media entity data with the Glide library to implement:
- Contextual image resizing: for example, the images displayed on the small
grid blocks on the homepage can be much smaller and of lower resolution
than the same images displayed as the main article image.
- Better image compression: all images are encoded as progressive JPEGs with
a quality of 70%. This change improved the loading time dramatically compared
to other formats such as PNG.
The integration of Glide into Symfony was done with a simple endpoint in the AssetController,
and we used signatures and caching to mitigate DDoS attacks on this endpoint.
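A minimal version of such an endpoint might look like the following (the route, storage paths and signature key are assumptions; the Glide and Symfony calls are the libraries' documented APIs):

```php
<?php

namespace AppBundle\Controller;

use League\Glide\Responses\SymfonyResponseFactory;
use League\Glide\ServerFactory;
use League\Glide\Signatures\SignatureException;
use League\Glide\Signatures\SignatureFactory;
use Symfony\Component\HttpFoundation\Request;
use Symfony\Component\HttpFoundation\Response;

class AssetsController
{
    public function assetAction(string $path, Request $request): Response
    {
        // Reject URLs that were not signed with the application secret, so the
        // resize parameters cannot be abused to overload the server.
        try {
            SignatureFactory::create(getenv('APP_ASSETS_KEY') ?: 'secret')
                ->validateRequest($request->getPathInfo(), $request->query->all());
        } catch (SignatureException $e) {
            return new Response('', Response::HTTP_FORBIDDEN);
        }

        $server = ServerFactory::create([
            'source' => '/var/storage/images', // original images (placeholder path)
            'cache' => '/tmp/glide',           // resized versions are cached on disk
            'response' => new SymfonyResponseFactory($request),
        ]);

        // Glide resizes and re-encodes the image according to the query string
        // parameters (w, h, q, fm, ...), e.g. ?w=300&q=70&fm=pjpg
        return $server->getImageResponse($path, $request->query->all());
    }
}
```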
Third, we lazy loaded all the images below the fold, in three steps:
- Load all the elements above the fold as fast as possible, and defer the
ones below it.
- Load ultra low resolution versions of the images below the fold (generated
with Glide) and use local JavaScript code to apply a Gaussian blur filter to them.
- Replace these blurred placeholders when the high quality images are loaded.
We implemented an application-wide JavaScript listener
to apply this behavior everywhere on the web site.
Forms
The project includes some interesting forms. The first one is the web site
sign-up form: depending on the values of the country and postal code fields,
the city field changes from a text input to a prepopulated select list.
Technically there are two fields: “cityName” and “city” (the second one is the
code assigned to the city according to the French regulations). The Form
component populates these two fields from the request, as usual.
On the view side, only the cityName field is displayed initially. If the
selected country is France, we use some JavaScript code to show the select list
of cities. This JavaScript code also listens to the change event of the postal
code field and makes an AJAX request to get the list of related cities. On the
server side, if the selected country is France, we require a city code to be
provided; otherwise, we use the cityName field.
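The server-side part of this behavior can be sketched with a form event listener along these lines (a simplified illustration using the field names described above; the class name and options are not the production code):

```php
<?php

namespace AppBundle\Form;

use Symfony\Component\Form\AbstractType;
use Symfony\Component\Form\Extension\Core\Type\CountryType;
use Symfony\Component\Form\Extension\Core\Type\TextType;
use Symfony\Component\Form\FormBuilderInterface;
use Symfony\Component\Form\FormEvent;
use Symfony\Component\Form\FormEvents;
use Symfony\Component\Validator\Constraints\NotBlank;

class SignUpType extends AbstractType
{
    public function buildForm(FormBuilderInterface $builder, array $options)
    {
        $builder
            ->add('country', CountryType::class)
            ->add('postalCode', TextType::class)
            ->add('cityName', TextType::class, ['required' => false])
            ->add('city', TextType::class, ['required' => false]);

        // When the submitted country is France, replace the free-text city
        // with a field that requires the official city code.
        $builder->addEventListener(FormEvents::PRE_SUBMIT, function (FormEvent $event) {
            $data = $event->getData();

            if (isset($data['country']) && 'FR' === $data['country']) {
                $event->getForm()->add('city', TextType::class, [
                    'constraints' => [new NotBlank()],
                ]);
            }
        });
    }
}
```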
This is a good example of the progressive enhancement technique
discussed earlier in this article. The JavaScript code, as everything
else, is just a helper to make some things nicer, but it's not critical to make
the feature work.
As these address fields are used a lot in the application, we abstracted them
into an AddressType form type
associated with an address JavaScript component.
The other interesting form is the
one that lets you send an email to someone to try to convince them to vote for
the candidate. It's a multi-step form that asks some questions about that other
person (gender, age, job type, topics of interest, etc.) and then generates
customized content that can be sent by email.
Technically the form combines a highly dynamic Symfony Form with the Workflow
component, which is a good example of how to integrate both. The implementation
is based on a model class called InvitationProcessor
populated from a multi-step, dynamic form type
and stored in the session. The Workflow component was used to
ensure that the model object is valid, defining which transitions are allowed
for each model state: see the InvitationProcessorHandler
and the workflows.yml config.
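In broad strokes, the handler checks and applies transitions like this (a rough sketch of the idea, not the actual handler code; the Workflow calls are the component's standard API):

```php
<?php

use Symfony\Component\Workflow\Registry;

class InvitationProcessorHandler
{
    private $workflows;

    public function __construct(Registry $workflows)
    {
        $this->workflows = $workflows;
    }

    public function advance($processor, string $transition): bool
    {
        $workflow = $this->workflows->get($processor);

        // Only apply the transition if the current place allows it, so a user
        // cannot skip a step by replaying a request out of order.
        if (!$workflow->can($processor, $transition)) {
            return false;
        }

        $workflow->apply($processor, $transition);

        return true;
    }
}
```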
Search Engine
The search engine, which is blazing fast and provides real-time search results,
is powered by Algolia. The integration to index the
application entities (articles, pages, committees, events, etc.) is made with
the AlgoliaSearchBundle.
This bundle is really useful. We just added a few annotations to the Doctrine
entities, and the search index is automatically updated whenever an entity is
created, updated or deleted. Technically, the bundle listens to Doctrine
events, so you don't need to do anything to keep the search contents up to date.
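The mapping looks roughly like this; the annotation names below are recalled from the bundle version used at the time and should be treated as an assumption rather than exact code:

```php
<?php

namespace AppBundle\Entity;

use Algolia\AlgoliaSearchBundle\Mapping\Annotation as Algolia;
use Doctrine\ORM\Mapping as ORM;

/**
 * @ORM\Entity
 *
 * @Algolia\Index
 */
class Article
{
    /**
     * @ORM\Id
     * @ORM\GeneratedValue
     * @ORM\Column(type="integer")
     */
    private $id;

    /**
     * Only annotated properties are pushed to the Algolia index.
     *
     * @ORM\Column(length=255)
     *
     * @Algolia\Attribute
     */
    private $title;

    /**
     * @ORM\Column(type="text")
     *
     * @Algolia\Attribute
     */
    private $content;
}
```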
Security
Like any other high-profile web site, we were the target of attacks
coordinated and carried out by powerful organizations. Most of the attacks were
brute-force in nature, aimed at taking the web site down rather than
infiltrating it.
The web site was targeted by DDoS attacks
eight times in the whole campaign, five of them in the final two weeks. They had
no impact on the Symfony app because of the Cloudflare mitigation and our
on-demand scalability based on Kubernetes.
First, we suffered three attacks based on WordPress pingbacks. The attackers
used thousands of hacked WordPress websites to send pingback requests to our
website, quickly overloading it. We added some checks
in the nginx configuration to mitigate these attacks.
The other attacks were more sophisticated and required both Cloudflare and
Varnish to mitigate them. Using Cloudflare to cache assets was so efficient that
we thought there was no need for a reverse proxy. However, a reverse proxy
proved necessary during DDoS attacks: in the last days of the campaign, the
attacks were huge (up to 300,000 requests per second) and we had to disable the
user system and enable the "Cache Everything" flag on Cloudflare.
There's nothing you can do to prevent attacks, but you can mitigate
them by following Symfony's best practices. Symfony, by the way, is one
of the few open source projects that has conducted a public security audit.
Open Source
The en-marche.fr web platform and
its related projects have been open sourced in the @EnMarche GitHub account. We didn't promote this
idea much though, because open source is pretty complex to explain to non-technical
people. However, we received some contributions from people who found the
project and were glad that it was open source.
We are also thinking about giving back to Symfony by contributing some elements
developed for the project. For example, the UnitedNationsCountryType form type
could be useful for some projects. We also developed an integration with the Mailjet
service that could be released as a Symfony bundle.