Blog 6 - The Lock In

Introduction

As agreed by the team in Week 9 of this semester, the final iteration for the IoT Applications module would take place over two consecutive days. The aim was to bring to life some of the concepts explored during the brainstorming session outlined in Blog 5.

The "Lock-In" concept was proposed by Jason Berry, our lecturer, as a way to put the project on temporary hold and allow everyone to focus on wrapping up other modules. We would then regroup and devote our full attention to IoT Applications. Held on the 21st and 22nd of May, the Lock-In gave us just enough time to develop a clear plan, divide into small teams, complete our assigned tasks, and merge everything into a single, cohesive artefact to showcase to Jason on the final day.

This blog will serve as a chronological reflection of the two-day Lock-In, from planning through to presenting the final prototype. I'll include diagrams, code examples, and discussion points explaining the why and how behind the decisions we made.


Figure 1 - Final Infrastructure



Figure 2 - Desired Infrastructure

Lock-In Day 1

We kicked off the first day with a group meeting, where Jason laid out his vision for the final deliverable. As our "client", he outlined a concept for an IoT artefact that incorporated both analogue input and output, something we had touched on earlier in the semester. He emphasised that the artefact should blend hardware and software components and interact with the cloud, embodying the principles of a modern IoT system.

With this direction in mind, we began planning the final iteration. We opted for a Kanban system to manage the workload, an approach well suited to a multidisciplinary team like ours. Using Kanban allowed us to assign tasks based on each member's strengths and interests, ensuring nobody was left idle or mismatched. We populated the Kanban board with key tasks and split into smaller teams to tackle different elements of the production lifecycle. Given the tight two-day timeframe, only the most essential items made the cut.

The system architecture we developed centred on streaming heart rate variability (HRV) data from a micro:bit sensor through the cloud to various endpoints. The data would flow from the micro:bit, be processed and transmitted via WebRTC, and ultimately be visualised in both Unity applications and web interfaces. Our goal was to create a seamless pipeline that could handle real-time biometric data streaming.
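The pipeline above can be sketched in miniature. This is an illustrative sketch only, not our exact code: the threshold, sample rate, and stage names are assumptions, but it shows the shape of the journey from raw pulse samples to the RR intervals that HRV work is built on.

```python
# Illustrative sketch of the pipeline's first stages:
# raw micro:bit samples -> beat detection -> RR intervals.

def detect_beats(samples, threshold=600):
    """Return sample indices where the pulse signal crosses the threshold upward."""
    beats = []
    for i in range(1, len(samples)):
        if samples[i - 1] < threshold <= samples[i]:
            beats.append(i)
    return beats

def rr_intervals_ms(beat_indices, sample_rate_hz=50):
    """Convert beat sample indices into RR intervals in milliseconds."""
    ms_per_sample = 1000 / sample_rate_hz
    return [(b - a) * ms_per_sample for a, b in zip(beat_indices, beat_indices[1:])]

# Example: a crude pulse waveform sampled at an assumed 50 Hz.
samples = [500, 550, 650, 500, 480, 500, 700, 520]
beats = detect_beats(samples)
print(beats)                   # [2, 6]
print(rr_intervals_ms(beats))  # [80.0]
```

In the real system these intervals would then be handed to the WebRTC layer for transmission, as described in the sections below.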

My Contributions - Cloud Infrastructure

My primary responsibility was setting up and managing the cloud infrastructure that would serve as the backbone for our real-time data streaming system. This involved deploying and configuring two critical EC2 instances on AWS.

Signaling Server EC2 Instance: The first instance I set up hosted our WebRTC signaling server, which acted as the coordination point for peer-to-peer connections. This server was essential for establishing the initial handshake between data senders and receivers, managing ICE candidate exchanges, and facilitating the WebRTC connection setup. I configured the instance with the necessary security groups to allow WebSocket connections on port 8080, ensuring our signaling protocol could operate smoothly.
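At its core, the signaling server is a message relay: it never inspects SDP or ICE content, it just forwards each message to the peer it names. The following is a minimal sketch of that routing decision under an assumed JSON protocol (the `to` field and outbox structure are illustrative); the real server wrapped this in a WebSocket loop on port 8080.

```python
# Hedged sketch of the signaling server's relay logic, assuming a simple
# JSON protocol where each message names its target peer.

import json

def route_message(raw, peers):
    """Append a raw JSON signaling message to the intended peer's outbox.

    `peers` maps peer_id -> list of pending messages. Returns False for
    unknown recipients (the real server would log and drop these).
    """
    msg = json.loads(raw)
    target = msg.get("to")
    if target not in peers:
        return False
    peers[target].append(msg)
    return True

# Two peers exchanging an SDP offer via the relay.
peers = {"sender": [], "receiver": []}
offer = json.dumps({"type": "offer", "to": "receiver", "sdp": "v=0 ..."})
route_message(offer, peers)
print(peers["receiver"][0]["type"])  # offer
```

Keeping the server this "dumb" is deliberate: all connection intelligence lives in the peers, so the relay stays stateless apart from its registry of connected clients.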

Frontend EC2 Instance: The second instance I deployed hosted our web frontend application. It served the HTML interface that users could access to visualise the incoming HRV data in real time. I ensured this instance was properly configured with the correct networking settings to communicate with the signaling server and to receive WebRTC streams from the data sources.

The cloud setup was crucial because it allowed our system to scale beyond local network limitations and provided a stable, accessible platform for demonstration. By hosting both the signaling infrastructure and frontend in the cloud, we created a truly distributed IoT system that could be accessed from anywhere.

Technical Implementation

Our technical stack brought together multiple technologies to create a cohesive system. The micro:bit collected HRV data and transmitted it via serial communication to a Python script running on a local machine. This script processed the raw sensor data and converted it into audio streams using WebRTC protocols.
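The serial bridge described above can be sketched as follows. The line format (`HRV:<value>`) and port name are assumptions for illustration; in practice, pyserial (the `serial` package) would supply the actual port reads, as shown in the commented-out loop.

```python
# Sketch of the local bridge script's serial handling. The "HRV:<value>"
# line format is an assumed example, not necessarily our exact protocol.

def parse_hrv_line(line):
    """Parse one serial line like 'HRV:742' into an int, or None if malformed."""
    line = line.strip()
    if not line.startswith("HRV:"):
        return None
    try:
        return int(line.split(":", 1)[1])
    except ValueError:
        return None

# With pyserial, the read loop would look roughly like:
#   import serial
#   with serial.Serial("/dev/ttyACM0", 115200, timeout=1) as port:
#       while True:
#           value = parse_hrv_line(port.readline().decode("utf-8", "ignore"))

print(parse_hrv_line("HRV:742\n"))  # 742
print(parse_hrv_line("garbage"))    # None
```

Defensive parsing matters here: serial links drop bytes, so every malformed line is skipped rather than crashing the bridge.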

The WebRTC implementation was particularly challenging, as we needed to establish reliable peer-to-peer connections through our cloud-hosted signaling server. The system used WebSockets for the initial signaling phase, allowing clients to exchange offers, answers, and ICE candidates necessary for establishing direct connections.
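From the client's point of view, the signaling phase is a small state machine: send an offer, wait for the answer, then accumulate ICE candidates until a direct connection is possible. The sketch below models only that bookkeeping; the message shapes are assumptions, and in the real system the browser's WebRTC stack (or a library such as aiortc in Python) generates the actual SDP and candidates.

```python
# Hedged sketch of the client-side signaling exchange. Message fields are
# illustrative; real SDP/ICE payloads come from the WebRTC stack itself.

import json

def make_offer(peer_id, sdp):
    """Build the initial offer message to send over the WebSocket."""
    return json.dumps({"type": "offer", "from": peer_id, "sdp": sdp})

def handle_signal(raw, state):
    """Advance the signaling state: record the answer, collect candidates."""
    msg = json.loads(raw)
    if msg["type"] == "answer":
        state["remote_sdp"] = msg["sdp"]
    elif msg["type"] == "candidate":
        state["candidates"].append(msg["candidate"])
    return state

state = {"remote_sdp": None, "candidates": []}
handle_signal(json.dumps({"type": "answer", "sdp": "v=0 ..."}), state)
handle_signal(json.dumps({"type": "candidate", "candidate": "candidate:1 ..."}), state)
print(state["remote_sdp"] is not None, len(state["candidates"]))  # True 1
```

Once both sides hold the remote description and enough candidates, the peer connection negotiates the direct media path and the signaling channel goes quiet.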

On the receiving end, we developed both web-based and Unity applications that could consume the real-time audio streams. The Unity application featured a visual heart animation that pulsed in response to the incoming heartbeat data, creating an engaging real-time visualisation of the biometric information.

Lock-In Day 2

Day 2 began with a brief team check-in to assess progress and identify any blockers. The focus shifted to integration and testing, as we worked to merge our individual components into a working prototype.

One of the major challenges we encountered was getting the audio streaming to work reliably through WebRTC. While our local testing had been successful, the integration with the cloud infrastructure presented new complexities. The team spent considerable time debugging connection issues and ensuring that data could flow seamlessly from the micro:bit, through our cloud services, and into the visualisation applications.

We made strategic decisions to prioritise stability over complete feature implementation. Rather than rushing to include every planned feature, we focused on creating a solid proof of concept that demonstrated the core functionality. This approach proved wise, as it allowed us to deliver a working system that showcased the key principles we'd learned throughout the semester.

Team Collaboration

One of the standout aspects of this project was the collaborative approach our team took. Rather than working in strict silos, we maintained constant communication and helped each other overcome technical challenges. The cloud infrastructure I set up became a shared resource that enabled other team members to test and integrate their components effectively.

The Unity team consistently coordinated with me to ensure their applications could properly connect to our cloud-hosted services. Similarly, the hardware and WebRTC teams relied on the cloud infrastructure to test their implementations under realistic conditions. This interdependence created a true team effort where everyone's contributions were essential to the final outcome.

Results and Reflection

By the end of the Lock-In, we had successfully created a working prototype that demonstrated real-time HRV data streaming from a micro:bit sensor through cloud infrastructure to multiple visualisation endpoints. While we didn't achieve every aspect of our original ambitious plan, we delivered a solid proof of concept that showcased our technical skills and ability to work effectively under pressure.

The cloud infrastructure proved robust throughout testing and demonstration, successfully handling the real-time data streaming requirements. The EC2 instances performed reliably, and the network configuration supported the WebRTC connections without significant latency issues.

Future Development

The system we built provides an excellent foundation for future development. The modular cloud architecture makes it straightforward to add new data sources, visualization methods, or processing capabilities. The WebRTC implementation could be enhanced to support multiple simultaneous connections, and the visualization applications could be expanded with additional features.

From a cloud perspective, the current setup could be enhanced with auto-scaling capabilities, load balancing, and more sophisticated monitoring. The infrastructure is designed to be extensible, making it relatively simple for future teams to build upon our foundation.

Conclusion

The Lock-In experience successfully brought together everything we'd learned throughout the semester about IoT systems, cloud computing, real-time data processing, and team collaboration. While we didn't implement every feature we initially envisioned, we created a functioning system that demonstrates the core principles of modern IoT architecture.

"A Bigger Heart" proved to be an apt name for our project – not just because of the heart rate monitoring functionality, but because it reflected our team's collaborative approach and willingness to support each other throughout the intensive development process. The project showcased how diverse technical skills can come together to create something greater than the sum of its parts.

Looking back, the Lock-In provided valuable experience in rapid prototyping, cloud deployment, and working under tight deadlines – skills that will prove invaluable in future IoT projects and professional development.
