Constructing Amatino's Alpha Infrastructure


Originally published at: https://amatino.io/blog/constructing-amatinos-alpha-infrastructure/

The first working version of Amatino was a pretty simple piece of software. It ran on a single server and could be utilised via HTTP requests or a bare-bones macOS Cocoa application.

Amatino “V1” was born in November 2016. It was the product of about two years of research into accounting data structures. As soon as I finished V1, I realised it was nowhere near being a useful service for a wider audience.

Useful services are highly available, respond with low latency, process requests quickly, and are shielded by strong security. Sure, I could throw more cores and RAM at V1, but the reality is that to provide those attributes, a service needs to scale out across an arbitrary number of machines.

Amatino “V2” takes the V1 core R&D and turns the volume up to 11. Amatino’s V2 is to V1 as SpaceX’s Falcon 9 is to Falcon 1. Falcon 1 proved concepts, but Falcon 9 provides a useful service.

In V2, concerns that V1 combined on a single machine are separated: for example, database machines don’t run caches, and the servers providing the website don’t process core API requests. Whereas V1 could run on a single server, V2 requires a constellation of at least six machines and, to be properly tested, should run across thirteen.

Spinning up six or more machines on a cloud provider would cost a pretty penny. I don’t know if you are going to like Amatino, or how long it will take before someone might generously choose to subscribe. It would not be prudent to spend big bucks on hosting at this stage.

Instead, I repurposed hardware from an old gaming rig. The V2 Alpha runs across thirteen virtual machines on an Intel 5820K CPU, with 32GB of RAM, backed by a Samsung 960 Pro NVMe disk, connected to you by a consumer-grade fibre line.

Hardly the hardware a production-grade service should be running on, but perfect for a minimum viable product.

I call the machine ‘Starport’. The worst part about running the Alpha on Starport is that it is located in Sydney, Australia, which is the ass-end of the universe when it comes to network latency. Everyone who connects to Amatino is going to get a ping of one gajillion.

In some ways, that’s good: it will force the software to operate under worst-case latency conditions. If Amatino can provide a good service at 400ms, then it should provide a great service at 20ms.

20ms really is the final goal. V2 has been designed to spin up global branches in seconds. Each branch then serves the requests originating closest to it, while retaining global data consistency. Over the coming months, I want to spin up instances in proximity to interested users (hey, maybe that’s you!), and gradually move all processing off Starport.
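
The post doesn’t spell out how each branch attracts the traffic closest to it. One common way to do that is latency-based DNS routing; the sketch below, using boto3 and Route 53, is purely illustrative of that approach and is not necessarily how Amatino routes traffic. The zone ID, record name, and branch addresses are hypothetical placeholders.

```python
# A minimal sketch of latency-based DNS routing with Route 53 (boto3).
# Illustrative only: zone ID, record name and branch IPs are hypothetical,
# and this is not necessarily how Amatino steers traffic.
import boto3

route53 = boto3.client("route53")

BRANCHES = {
    # AWS region -> public IP of a hypothetical branch in that region
    "ap-southeast-2": "203.0.113.10",   # Sydney
    "us-east-1": "198.51.100.20",       # N. Virginia
}


def publish_branch_records(zone_id: str, record_name: str) -> None:
    """Create one latency-routed A record per branch.

    Route 53 answers each query with the record whose region has the
    lowest measured latency to the querying resolver, so clients are
    steered toward their geographically closest branch.
    """
    changes = []
    for region, ip in BRANCHES.items():
        changes.append({
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": record_name,
                "Type": "A",
                "SetIdentifier": f"branch-{region}",
                "Region": region,
                "TTL": 60,
                "ResourceRecords": [{"Value": ip}],
            },
        })
    route53.change_resource_record_sets(
        HostedZoneId=zone_id,
        ChangeBatch={"Changes": changes},
    )


# Hypothetical zone and record name
publish_branch_records("Z_EXAMPLE_ZONE_ID", "api.example.org.")
```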

If you would be interested in testing an Amatino branch near you, let me know your whereabouts in the discussion forums. I’d love to spin one up for you so you can give me feedback!

- Hugh


Update: I’ve gone ahead and moved the Alpha deployment off Starport and into AWS. It’s costing me an arm and a leg, but it’s allowing me to test in a more realistic deployment environment.

Side effect: I can now deploy Amatino to any AWS region (or, for that matter, any other hosting provider) with a few commands. So if you want Amatino deployed near you, just hit me up (@hugh_jeremy / hugh@amatino.io).
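
Those “few commands” aren’t shown here, but a region-agnostic launch can be roughly sketched with boto3. Everything in the sketch (AMI, instance type, key name, tags) is a hypothetical placeholder rather than Amatino’s actual deployment tooling.

```python
# Rough sketch of launching a branch host in an arbitrary AWS region with
# boto3. The AMI ID, instance type and key name are hypothetical; the real
# Amatino deployment commands are not shown in the post.
import boto3


def launch_branch_host(region: str, ami_id: str, key_name: str) -> str:
    """Start a single EC2 instance in `region` and return its instance ID."""
    ec2 = boto3.client("ec2", region_name=region)
    response = ec2.run_instances(
        ImageId=ami_id,           # pre-baked image with the service stack
        InstanceType="t3.medium",
        KeyName=key_name,
        MinCount=1,
        MaxCount=1,
        TagSpecifications=[{
            "ResourceType": "instance",
            "Tags": [{"Key": "Name", "Value": f"amatino-branch-{region}"}],
        }],
    )
    return response["Instances"][0]["InstanceId"]


# e.g. spin up a branch near a user in Frankfurt (placeholder AMI and key)
instance_id = launch_branch_host(
    "eu-central-1", "ami-0123456789abcdef0", "amatino-deploy"
)
print(instance_id)
```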