
Scalability comes home to Work-books with AWS and Elastic…
The last couple of weeks have been a pretty hard slog. Trying to interview and build a team through the medium of webchats, on top of the current employment market climate, has been challenging…
One of the key hires will always be our Senior Dev, and we are looking for someone with a little DevOps in the mix.
However, our so-far futile search left us with an issue: the codebase and product must still progress, and one of the key challenges we faced was scalability. In the move from POC (proof of concept) to MVP (minimum viable product) there's a LOT of consideration and foundation work to put into the code and configuration to allow a product to scale.
Importantly, we had to learn how to spin up a server on demand and make the server configuration work out of the box, so to speak. Usually we tweak server configs, add apps and extensions, and maintain lots of config files; anyone who has tried to set up an Ubuntu server just to share a web page on the internet will understand this pain.
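For anyone curious what "on demand" looks like in practice, here is a minimal sketch using boto3, the AWS SDK for Python. The region, AMI ID, instance type, and user-data script are hypothetical placeholders rather than our actual configuration; the point is that everything the server needs is passed in at launch instead of being hand-tweaked afterwards.

```python
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")  # hypothetical region

# cloud-init user data: the server configures itself on first boot, so it
# works "out of the box" with no manual tweaking after launch.
user_data = """#!/bin/bash
apt-get update -y
apt-get install -y nginx
systemctl enable --now nginx
"""

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # hypothetical Ubuntu AMI
    InstanceType="t3.medium",
    MinCount=1,
    MaxCount=1,
    UserData=user_data,
)
print(response["Instances"][0]["InstanceId"])
```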
Our new architecture in all its glory…

There have been several key developments in our recent architecture move:
- We moved away from a single server providing the application and data storage collectively.
- As the first step, we split the data storage and the application onto separate servers, which allows us to configure each server specifically for its task: one tuned for processing and computational power, the other tuned for reading and writing data.
- We already ran a separate server configured specifically for the PBX and video chat functionality; these have now all been brought under the same regional endpoint in the AWS cloud.
- We have added two load balancers: one for the application environment, one for the video chat environment.
- These secure points monitor the server array behind them, checking client requests, response times, and the behaviour of the stack. If we are busy, we spin up more servers to meet the demand; if we are quiet, they elegantly shut down again (there's a sketch of this auto-scaling behaviour after this list).
- We have an internal balancer that monitors and provides the computational capacity for the video chat services, which, given the volume of information processed, carry a high computational overhead.
- On the datastore, we have added a read-only replica so that we can again spread the transactional demands across multiple servers, leaving one server to handle all the writes of data going in whilst keeping the control and capacity to meet the read demands of the client base (see the read-replica sketch after this list).
- Finally, we secured the front with an SSL endpoint and a firewall around the entire service stack, keeping and monitoring the safety and privacy of the data.
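The "spin up when busy, shut down when quiet" behaviour can be expressed as an EC2 Auto Scaling target-tracking policy. Here is a minimal sketch with boto3; the Auto Scaling group name, policy name, and 50% CPU target are hypothetical, and in practice the target metric would be tuned to the workload.

```python
import boto3

autoscaling = boto3.client("autoscaling", region_name="eu-west-1")

autoscaling.put_scaling_policy(
    AutoScalingGroupName="app-server-group",  # hypothetical group name
    PolicyName="keep-cpu-at-50-percent",      # hypothetical policy name
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        # Add instances when average CPU rises above the target and remove
        # them when it falls below; AWS handles the scale-out/scale-in maths.
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 50.0,
    },
)
```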
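The read-only replica is similarly a single call if the datastore runs on RDS (an assumption here); one primary keeps taking all the writes while reads are served from the replica. A sketch with boto3, with hypothetical instance identifiers and class:

```python
import boto3

rds = boto3.client("rds", region_name="eu-west-1")

rds.create_db_instance_read_replica(
    DBInstanceIdentifier="app-db-read-1",         # hypothetical replica name
    SourceDBInstanceIdentifier="app-db-primary",  # hypothetical primary
    DBInstanceClass="db.t3.medium",
)
```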
All of this work and investment in our infrastructure ensures we can meet future capacity demands efficiently, automatically, and, most importantly, cost-effectively as we move towards rolling out our Beta Launch.
Steven Moffat, Founder