EXTENDED CV
A highly motivated and adaptable Software Developer, able to quickly
learn new skills in high-pressure environments. In every professional project
so far I've taken complete ownership of my work, from design to deployment,
without supervision or hand-holding. I focus on writing clean, well-tested
code that is easy to maintain and extend, and I prefer to spend time prototyping
rather than planning.
If you have any further questions about anything written here, you can reach me at
[email protected].
Experience
Software Development Engineer
|
AWS
Oct 2024 - Present
// Java, Python, Networking, Infrastructure
Part of a team within NAE (Network Availability Engineering) that focuses on shifting traffic between devices for internal services.
I've only been here a few weeks, so it's too soon to write much about my experience.
Graduate Software Developer
|
Techex
Jul 2024 - Oct 2024
// Rust, Audio, Video, Codecs
Rust developer at a well-established live broadcast technology company, primarily involved in the development of their modular txdarwin product.
Techex's about page says "We've been in the live broadcast industry since 1972, and while you may not have heard of us, you will definitely have seen our work". Before applying I'd certainly never heard of them, but once I began working there I realised they're incredibly well respected within the industry, and for good reason. Their customers include BBC, Sky, WB, BT, and Comcast.
The development team was small, but agile. From the very first week I was given the freedom to dive straight into new features with full ownership and autonomy, an experience for which I am incredibly grateful. Despite my brief time here, I got the opportunity to work on a wide range of interesting projects (one of which was even used in the 2024 Olympics).
90% of my time was spent on pure Rust projects, although I also made some minor contributions to their frontend codebase (React/TypeScript) when necessary.
I'd have quite happily stayed here for a very long time. However, AWS finally got back to me with an offer for a role I'd applied for almost 4 months prior to starting at Techex, and I felt I couldn't pass up the opportunity.
Backend Developer
|
AirDeveloppa
2023 - 2024
// Rust, actix-web, MongoDB, Redis, AWS, DigitalOcean, Docker, Stripe, Zebedee, otel
Rebuilt the company's existing backend infrastructure and implemented new features required for launch.
Based in Chiang Mai, a city notorious for its annual "smoky season", AirDeveloppa's online platform provides a way for people to monitor the indoor air quality of various local businesses in real time. One of the extra features I was hired to implement was a "check-in" system, which uses micropayments to incentivise potential customers to visit businesses and verify that the AQI monitoring devices are correctly positioned. I was also hired to implement all the backend functionality required for businesses to purchase credits and manage their micropayments to customers (daily budget, payment per check-in, etc).
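As a sketch of the kind of budget logic involved (the type, field, and method names here are hypothetical, not the actual AirDeveloppa code), a check-in payout only goes through if it still fits within the business's remaining daily budget:

```rust
/// Per-business micropayment settings (amounts in satoshis).
/// Illustrative only; the real model had more fields.
struct BusinessBudget {
    daily_budget: u64,
    payment_per_checkin: u64,
    spent_today: u64,
}

impl BusinessBudget {
    /// Returns the payout amount if this check-in fits in today's
    /// remaining budget, or None if the budget is exhausted.
    fn try_checkin(&mut self) -> Option<u64> {
        let remaining = self.daily_budget.saturating_sub(self.spent_today);
        if remaining >= self.payment_per_checkin {
            self.spent_today += self.payment_per_checkin;
            Some(self.payment_per_checkin)
        } else {
            None
        }
    }
}

fn main() {
    let mut b = BusinessBudget {
        daily_budget: 100,
        payment_per_checkin: 40,
        spent_today: 0,
    };
    assert_eq!(b.try_checkin(), Some(40)); // 40 spent
    assert_eq!(b.try_checkin(), Some(40)); // 80 spent
    assert_eq!(b.try_checkin(), None);     // only 20 left, rejected
}
```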
Their existing backend was a Node.js (Express) monolith, with several thousand lines of code in one file and no real tests, documentation or CI/CD. After some very basic load testing, it was clear that the existing infrastructure would not be able to handle the expected traffic. At this point I decided it would make more sense to completely rebuild the backend from scratch, rather than trying to refactor the existing codebase.
My primary reason for choosing Rust was that it's the language I can move fastest with, but it also brought high performance, safety, and reliability. The end result was a much more robust and scalable system, with a fully documented API, a suite of automated tests, and a CI/CD pipeline for automated deployment to production and staging environments. OpenTelemetry and New Relic were used for observability/tracing. I also ensured that the new backend was fully backwards compatible with the previous API to avoid any breaking changes for the frontend team.
Stripe was used for all business-to-business payments, and Zebedee was used to handle all micropayments to customers (these micropayments had to be made in Bitcoin due to high transaction fees in Thailand).
As this was a remote position, I made sure to keep the client updated on my progress, leaving a trail of GitHub issues and pull requests for any future developers to follow.
Blockchain Developer
|
Freelance
2021 - 2022
// Solidity, Rust, Python, TypeScript, Ethereum, Solana, Docker, PostgreSQL, Redis
Hired mostly to write smart contracts for various projects on Ethereum, and eventually Solana.
This began as a part-time endeavour, but led to me taking a year off University to work full time on further projects. I was initially asked if I would be able to write the smart contract for a fairly simple NFT project, and despite having no prior experience with Solidity or smart contracts, I accepted the offer because I knew I'd be able to figure it out.
The next project was much more complex and had to be built on the Solana blockchain, which is when I decided to take a break from University. I had no experience with Rust, but the biggest challenge was actually learning the massive differences in Solana's architecture compared to Ethereum. I was the only blockchain developer on the project, and documentation for Solana was rather limited at the time, so I had to learn everything from scratch while dealing with the pressures of a tight deadline, a large amount of money at stake and a community of investors eagerly awaiting the launch.
A source of on-chain verifiable randomness was required, and Solana didn't have anything like Chainlink's VRF oracles at the time, so I had to implement my own solution. I ended up writing a separate Solana program which used price feeds from on-chain oracles as a source of randomness: it would take the least significant digits from multiple asset prices, combine them into a single number, then hash that with the current timestamp, the caller's account ID, and other values unique to each transaction to produce a large random number.
This process had to be separated into two steps to prevent anyone from predicting the next random number, so I split it into a request step, followed by a reveal step which could only be called after the next block had been produced. After extensive local testing and analysis I determined that this method produced numbers with a distribution similar to the output of a standard PRNG, and that the only way to predict its output would be to control the price feeds used as input, or to predict the prices of multiple crypto assets at the smallest possible precision (anyone able to do this would make far more money trading crypto than exploiting this system).
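A simplified, off-chain sketch of that derivation (Rust's standard hasher stands in for the on-chain hash function, and the function and parameter names are illustrative, not the actual program's API):

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

/// Illustrative sketch of the randomness derivation described above:
/// fold the least significant digits of several oracle prices into a
/// seed, then hash the seed with values unique to the transaction.
fn derive_random(prices: &[u64], timestamp: i64, caller_id: &[u8]) -> u64 {
    // Take the 3 least significant digits of each price and fold them
    // into a single seed (wrapping arithmetic avoids overflow panics).
    let seed = prices
        .iter()
        .fold(0u64, |acc, p| acc.wrapping_mul(1000).wrapping_add(p % 1000));

    // Hash the seed together with per-transaction values so two calls
    // in the same block by different callers still differ.
    let mut hasher = DefaultHasher::new();
    seed.hash(&mut hasher);
    timestamp.hash(&mut hasher);
    caller_id.hash(&mut hasher);
    hasher.finish()
}

fn main() {
    // Same inputs always produce the same output (deterministic)...
    let a = derive_random(&[2_531_417, 98_703], 1_700_000_000, b"caller");
    let b = derive_random(&[2_531_417, 98_703], 1_700_000_000, b"caller");
    assert_eq!(a, b);
    // ...while changing any per-transaction input changes the result.
    assert_ne!(a, derive_random(&[2_531_417, 98_703], 1_700_000_001, b"caller"));
}
```

The two-step request/reveal split matters precisely because this function is deterministic: the reveal uses price data that did not exist at request time.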
Overall, this project was the most intense and stressful experience of my life. I was working 12+ hours a day, 7 days a week, for several months straight. After finally launching the project (preceded by a 30+ hour session of development and constant debugging, without sleep) I woke up the next day, after only a few hours' sleep, to a message from our community manager asking if I could fix a very minor bug that had been reported. At that point I should have realised I needed a day off to recover, but instead I sat down and started working on the bug. In my sleep-deprived state I pushed a hotfix which resolved the bug, but the change affected the on-chain state of the Solana program in a way that allowed some users to mint an absurd amount of tokens.
The liquidity pool was drained within minutes, and I had to work out a fix without guidance or supervision from anyone more experienced than myself. In the end I resolved the issue, and luckily a few diligent members of our team were able to track down and recover the lost funds (~$60k). It was a valuable lesson I'll never forget: since that incident I make sure everything is thoroughly tested before deploying to production, and I don't let others rush me into changes I'm not 100% confident in.
Compared to this project, the rest of the projects I worked on were a breeze. Although I was still the only blockchain developer on future projects, I had the opportunity to work with some highly experienced developers in other areas, which was a great learning experience.
Project/QA Technician
|
Pharmagraph
2016 - 2019
// Engineering, Electronics, Project Planning, Quality Assurance, Testing, CAD
Design, testing and installation of a range of environmental monitoring systems, primarily for the pharmaceutical industry.
I began working at Pharmagraph as a Project Technician, a role which mostly involved assisting Project Engineers with the overall design of the systems for each new project. Eventually I was able to take on more responsibility and had the opportunity to work on certain aspects of several projects by myself. Every project was an inter-disciplinary effort, requiring effective communication between the various teams involved and with the customer themselves.
After a couple of years I was promoted to a Quality Assurance role, which mostly involved testing and inspecting the systems before they were shipped to the customer. Luckily, I was still sometimes able to go overseas to perform the installation and commissioning of the systems. I was often expected to go as the only representative of the company, so I had to be able to work independently and solve problems on my own (especially when timezone differences meant I couldn't contact anyone back at the office), while also effectively communicating with the customer and ensuring they were fully satisfied with everything.
I gradually developed more of an interest in the software side of things, which finally led to me making the decision to study Computer Science at University.
I was sad to leave Pharmagraph, and my time there was a great learning experience; it definitely developed my ability to work with others and communicate effectively, along with my problem solving skills, which are all things I've successfully applied to my career as a Software Developer.
Recent Projects
Proprietary Embedded Product
2023
// Rust, KiCad, ESP32, ESP-IDF, ESP-ADF
Designing the electronics and firmware (written entirely in Rust) for an embedded system in a physical product.
I can't say too much about this project as it's proprietary and still in development, but it's a physical product with an embedded system running on an ESP32 microcontroller. The firmware is written entirely in Rust (although there's a fair bit of FFI into C/C++ libraries like ESP-ADF), and the PCB was designed in KiCad.
The device needs to be able to connect to a mobile app via WiFi, so I've had to implement an embedded HTTP server that handles websocket connections for streaming data to/from the app. At the same time, the device needs to handle interrupts from physical inputs while controlling various outputs.
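The core pattern can be sketched as a single event loop fed by a channel, with interrupt handlers doing nothing but pushing events; the websocket side is stubbed out here and all names (event variants, the `drain_events` helper) are illustrative rather than taken from the actual firmware:

```rust
use std::sync::mpsc;

/// Events raised by the physical inputs (variant names are illustrative).
#[derive(Debug, Clone, PartialEq)]
enum InputEvent {
    ButtonPressed(u8),
    EncoderTurned(i8),
}

/// Sketch of the firmware's event loop pattern: interrupt handlers only
/// push events into a channel, and one loop drains the channel, updates
/// outputs, and forwards state to any connected websocket clients. The
/// websocket is stubbed as a Vec of serialised messages.
fn drain_events(rx: &mpsc::Receiver<InputEvent>, ws_out: &mut Vec<String>) {
    while let Ok(event) = rx.try_recv() {
        // In real firmware this would also toggle GPIO outputs before
        // serialising the new state over the websocket connection.
        ws_out.push(format!("{event:?}"));
    }
}

fn main() {
    let (tx, rx) = mpsc::channel();
    // Simulate two interrupts firing between loop iterations.
    tx.send(InputEvent::ButtonPressed(1)).unwrap();
    tx.send(InputEvent::EncoderTurned(-3)).unwrap();

    let mut ws_out = Vec::new();
    drain_events(&rx, &mut ws_out);
    assert_eq!(ws_out, vec!["ButtonPressed(1)", "EncoderTurned(-3)"]);
}
```

Keeping interrupt handlers this thin is the standard way to stop slow work (like network I/O) from ever running in interrupt context.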
Pangea Hackathon Entry
2023
// Rust, Pangea, MicroPython, Raspberry Pi, RP-Pico-W, MQTT, NATS
Proof of concept for a distributed IIoT (Industrial IoT) system, built in just one week.
The only requirement for this hackathon was to use Pangea's API for some aspect of the project. Pangea is a security-as-a-service platform, and I decided to use its tamper-proof audit log to store readings from (real and simulated) sensors. The audit log isn't intended for this purpose, but tamper-proof logging of sensor readings is a requirement for 21 CFR Part 11 compliance, and Pangea's Merkle-tree-based audit log seems to fit the criteria.
It's the sort of situation where I think the distributed nature of wasmCloud + NATS would be a good fit, as you can have multiple wasmCloud servers in a facility which use NATS to form a self-healing lattice network that connects to a more global lattice network.
Ideally, you'd also leverage Matter/Thread (or something similar) to form a mesh network between the sensors, but I wasn't able to get this far because I was late to the hackathon and only had a week to work on it. I would definitely like to pick this project back up in the future, or something similar at the very least.
For more information about the actual architecture and implementation, see the GitHub repo.
This project is also the first time I used Pangea's API, which I actually ended up using in a professional project.
Cosmonic Hackathon Entry 🏆
2023
// Rust, NATS, WebAssembly, wasmCloud, Cosmonic, SurrealDB
A distributed vulnerability scanner leveraging wasmCloud/Cosmonic; won 1st place.
This project was by no means finished; it was more of a proof of concept. It was my first time using something like wasmCloud, and I wanted to see exactly what it was capable of, as well as to experiment with SurrealDB, since at that point I'd only used SQL databases such as PostgreSQL. I don't actually believe this is a use case that requires a cloud-based solution, let alone a distributed one (so I probably won't ever finish this project), but it was a good learning experience.
Something I really liked about SurrealDB was how easy it was to set up an authentication system that's run entirely on the database itself. This removes the need to maintain a separate authentication server/middleware and allows multiple services to easily share the same in-house authentication system, which is very convenient for a distributed constellation of microservices.
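As a rough sketch of what in-database auth looks like in SurrealDB's query language (written from memory against the 1.x `DEFINE SCOPE` syntax, so check the current SurrealDB documentation; the table and field names are illustrative):

```sql
-- A scope that lets users sign up and sign in directly on the database,
-- with no separate auth server in front of it.
DEFINE SCOPE account SESSION 24h
    SIGNUP ( CREATE user SET email = $email, pass = crypto::argon2::generate($pass) )
    SIGNIN ( SELECT * FROM user WHERE email = $email AND crypto::argon2::compare(pass, $pass) );
```

Any service that can reach the database can then authenticate against the same scope, which is what makes it convenient for multiple microservices.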
NATS messaging was used to orchestrate the tasks performed by different types of scanners and to handle the aggregation of their results. This facilitated the modular architecture of the system, allowing new types of scanners to be added without having to modify any existing code; a new type of scanner simply needs to subscribe to the appropriate NATS topic.
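The dispatch pattern can be sketched with the NATS client swapped out for a plain in-process map so the example runs standalone; the subject names and handler signatures are illustrative, not the project's actual code:

```rust
use std::collections::HashMap;

/// Stand-in for a NATS connection: maps a subject to the handler that
/// "subscribed" to it. In the real system each scanner type subscribes
/// to its own subject, so adding a new scanner type means registering
/// a new subscription rather than modifying existing code.
struct Bus {
    handlers: HashMap<String, fn(&str) -> String>,
}

impl Bus {
    fn new() -> Self {
        Bus { handlers: HashMap::new() }
    }

    /// Equivalent of a scanner subscribing to its NATS subject.
    fn subscribe(&mut self, subject: &str, handler: fn(&str) -> String) {
        self.handlers.insert(subject.to_string(), handler);
    }

    /// Equivalent of publishing a task and awaiting the scan result.
    fn publish(&self, subject: &str, payload: &str) -> Option<String> {
        self.handlers.get(subject).map(|h| h(payload))
    }
}

fn port_scanner(target: &str) -> String {
    format!("port-scan result for {target}")
}

fn main() {
    let mut bus = Bus::new();
    bus.subscribe("scan.ports", port_scanner);

    let result = bus.publish("scan.ports", "10.0.0.1");
    assert_eq!(result, Some("port-scan result for 10.0.0.1".to_string()));

    // Unknown subjects simply have no subscriber yet.
    assert!(bus.publish("scan.tls", "10.0.0.1").is_none());
}
```

With real NATS the map lookup becomes subject-based routing on the broker, and aggregation falls out of having all scanners publish results to a shared results subject.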
For more information, see the GitHub repo.
Education
Computer Science BSc
|
Swansea University
2019 - 2024
Graduated with a first class degree
Total average of 84.3%
Results:
- Year 0 (Foundation Year)
- Introduction to Programming (Python) - 90%
- Computational Problem Solving - 90%
- Computational Probability - 92%
- Technologies for Information Presentation - 94%
- Fundamentals of Robotics - 84%
- Computers Unplugged - 94%
- Fundamental Geometry - 91%
- Fundamental Mathematics - 81%
- Year 1
- Programming 1 (Java) - 90%
- Programming 2 (Java) - 83%
- Professional Issues 1: Computers and Society - 84%
- Professional Issues 2: Software Development - 77%
- Concepts of Computer Science 1 - 89%
- Concepts of Computer Science 2 - 88%
- Modelling Computing Systems 1 (Discrete Maths) - 76%
- Modelling Computing Systems 2 (Discrete Maths) - 74%
- Year 2
- Introduction to Human-Computer Interaction - 86%
- Concurrency - 82%
- Computer Graphics - 88%
- Automata and Formal Language Theory - 92%
- Declarative Programming (Haskell) - 89%
- Software Engineering (Java) - 85%
- Database Systems - 88%
- Algorithms - 95%
- Year 3
- Cryptography and IT Security - 80%
- Big Data and Machine Learning - 86%
- Web Application Development - 82%
- Brain-Inspired Artificial Intelligence - 79%
- Embedded System Design - 85%
- Introduction to Video Games Programming - 89%
- Final Year Project Implementation and Dissertation - 75%
- Final Year Project Specification and Development - 72%