Blog 6 - Lock-In (Final Iteration)


Structure Diagram

Figure 1 Structure Diagram

Video Demo


As agreed by the team in Week 9 of this semester, the final iteration for the IoT Applications module would take place over two consecutive days. The aim was to bring to life some of the concepts explored during the brainstorming session outlined in Blog 5 – "The Bigger Heart" Prologue.

The "Lock-In" concept was proposed by Jason Berry, our team’s senior lead, as a way to put the project on temporary hold and allow everyone to focus on wrapping up other modules. We would then regroup and devote our full attention to IoT Applications. Held on the 21st and 22nd of May, the Lock-In gave us just enough time to develop a clear plan, divide into smaller teams, complete our assigned tasks, and merge everything into a single, cohesive artefact to showcase to Jason on the final day.

This blog will serve as a chronological reflection of the two-day Lock-In, from planning through to presenting the final prototype. Alongside the summary, I’ll include diagrams, code examples, and discussion points explaining the why and how behind the decisions we made.

Figure 2 The IoT Applications Team

Lock-In Day 1

We kicked off the first day with a group meeting, where Jason laid out his vision for the final deliverable. As our “client”, he outlined a concept for an IoT artefact that incorporated both analogue input and output, something we had touched on earlier in the semester. He emphasised that the artefact should blend hardware and software components and interact with the cloud, embodying the principles of a modern IoT system.

With this direction in mind, we began planning the final iteration. We opted for a Kanban system to manage the workload, an approach well-suited to a multidisciplinary team like ours. Using Kanban allowed us to assign tasks based on each member’s strengths and interests, ensuring nobody was left idle or mismatched. We populated the Kanban board with key tasks and split into smaller teams to tackle different elements of the production lifecycle. Given the tight two-day timeframe, only the most essential items made the cut.

Figure 3 Kanban Board


Figure 4 Example of how WebRTC works


I joined the team responsible for setting up communication between an HRV (Heart Rate Variability) sensor and the cloud, using WebRTC. Having previously worked with WebRTC in my final year project, I was keen to apply and expand those skills in a new context. In addition to the WebRTC setup, I also worked on real-time data conversion, specifically converting HRV sensor readings formatted as JSON into audio for low-latency transmission.

Both components were crucial to the success of the project.

Once everyone understood their roles and deliverables, we got stuck in. I started with a demo I had developed earlier in the semester that showed a basic WebRTC connection between two peers. It was the perfect launchpad for this iteration. The original demo featured three key files:

  1. A signalling server (JavaScript) – acting as the middleman in the peer-to-peer connection, using WebSockets for communication.
  2. A sender script (Python) – initiating the connection and sending data.
  3. A receiver script (Python) – accepting the connection and handling incoming data.

WebSockets were chosen for the signalling server because they allow for lightweight, real-time bidirectional communication. Once the sender and receiver had exchanged offers and answers through the signalling server, the WebRTC connection was established over UDP. A data channel was then used to transmit data with minimal latency.
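Concretely, the signalling traffic is just small JSON payloads relayed blindly between the two peers. A rough sketch of the three message shapes our scripts exchange (the field names match the appendix code; the SDP body and candidate string below are illustrative placeholders, not real session data):

```python
import json

# SDP offer from the sender (the "sdp" body is elided for brevity)
offer = {"type": "offer", "sdp": "v=0 ..."}

# SDP answer from the receiver: the same shape with the type flipped
answer = {"type": "answer", "sdp": "v=0 ..."}

# Trickle ICE candidate, forwarded unchanged to the other peer
candidate = {
    "type": "candidate",
    "candidate": {
        "candidate": "candidate:0 1 UDP 2122252543 192.0.2.1 54400 typ host",
        "sdpMid": "0",
        "sdpMLineIndex": 0,
    },
}

# The signalling server never inspects these messages; it just
# re-broadcasts the serialised JSON to every other connected client.
wire = json.dumps(offer)
parsed = json.loads(wire)
assert parsed["type"] == "offer"
```

Because the server treats every message as an opaque string, adding new message types later (for example, a "bye" to tear down the call) needs no server changes at all.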

With the demo working locally, the next challenge was deploying it to the cloud. I collaborated with Dean, Jay, and Alex, who specialise in cloud computing. We decided to host the signalling server on an AWS EC2 instance. To facilitate this, I created a GitHub repository containing all the WebRTC code. Dean and Jay handled the EC2 setup, and we updated the WebSocket URLs in the sender and receiver scripts to point to the cloud instance. After some adjustments, Alex and I successfully transferred a test PNG image via the WebRTC connection, with my script sending from one machine and his script receiving on another.
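Sending a file over a data channel comes down to slicing it into message-sized pieces and concatenating them on the far side. A minimal sketch of the chunking step (the 16 KB chunk size is an illustrative choice, not taken from our actual scripts; each chunk would be passed to the data channel's `send()`):

```python
CHUNK_SIZE = 16 * 1024  # illustrative; data channels favour modest message sizes


def chunk_bytes(data: bytes, size: int = CHUNK_SIZE):
    """Yield successive fixed-size chunks of a payload for channel.send()."""
    for offset in range(0, len(data), size):
        yield data[offset:offset + size]


# Stand-in for the PNG bytes we transferred during the Lock-In
payload = bytes(range(256)) * 200

chunks = list(chunk_bytes(payload))
# On the receiving side the chunks are simply concatenated back together
reassembled = b"".join(chunks)
assert reassembled == payload
```

Because data channels preserve message order and reliability by default, no sequence numbers are needed for a simple transfer like this.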

Next, we focused on sending audio data. Initially, we adapted the data-channel transfer to handle .mp3 files. While this worked, our goal was real-time audio streaming, something that required a shift from data channels to a getUserMedia media stream. This presented several challenges, and with limited time left on Day 1, I continued development at home, eventually getting the stream working against the local JavaScript signalling server.

On the Unity side of things, Brendan and Mark did great work building an app featuring a visual heart animation, one that pulsed in size in response to a soundtrack. This became especially valuable once we got the audio streaming working. The idea was that when a live audio signal representing the heartbeat came through, the heart in the Unity app could react to it in real time. It was a simple but powerful way to visualise what the data represented, and it tied in perfectly with the goal of making both the sound and the rhythm of the heartbeat tangible and engaging.

Lock-In Day 2 

We began Day 2 with a short team meeting, more of a check-in than a full discussion. The focus was on updating each other on what had been completed so far and figuring out what was realistically achievable in the remaining hours. For me, successfully getting WebRTC to stream audio marked a major breakthrough, and the next step was to integrate that with Andrew’s work.

Andrew had been working on converting HRV (Heart Rate Variability) data into a sine wave format that could then be transmitted as audio. Our goal for the day was to combine our workstreams: taking the real-time converted HRV signal and streaming it through the WebRTC pipeline I had built.
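Andrew's conversion step is his own work, but the general idea can be sketched. Assume each JSON reading carries an RR interval in milliseconds; one hypothetical mapping (the `rr_ms` field name, the 400–1200 ms interval window, and the 220–880 Hz pitch range are all my illustration, not the actual implementation) scales the interval linearly onto an audible frequency:

```python
import json


def rr_to_frequency(reading_json: str,
                    rr_range=(400.0, 1200.0),
                    freq_range=(880.0, 220.0)) -> float:
    """Map an RR interval (ms) onto an audible frequency (Hz).

    Shorter intervals (a faster heartbeat) map to higher pitches.
    Field names and ranges here are illustrative only.
    """
    rr = json.loads(reading_json)["rr_ms"]
    lo, hi = rr_range
    f_at_lo, f_at_hi = freq_range
    # Clamp the interval, then interpolate between the frequency endpoints
    t = (min(max(rr, lo), hi) - lo) / (hi - lo)
    return f_at_lo + t * (f_at_hi - f_at_lo)


print(rr_to_frequency('{"rr_ms": 800}'))  # midpoint of the window -> 550.0 Hz
```

The resulting frequency would then drive the sine-wave generator, so the pitch of the tone rises and falls with the wearer's heart rate.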

However, merging our code turned out to be more complex than expected. Although I had previously succeeded in streaming .wav files through WebRTC, the system didn’t behave the same way when using the sine wave generated from HRV data. After several hours of trial and error, it became clear to both Andrew and me, who were co-leading this part of the project, that completing the full implementation within the limited time left simply wasn’t feasible.

Instead of abandoning the effort, we shifted our focus to delivering a working prototype that could serve as a solid foundation for future development. We decided to hardcode a dummy sine wave signal at 440 Hz (the standard A note used for tuning musical instruments). This approach gave us a predictable test signal that could easily be swapped out for the real HRV-based sine wave later. The aim was to create a handover-ready solution for next year’s team, rather than rushing a half-working feature under pressure.

With only about an hour left, we finally found the bug that had blocked us for most of the afternoon. The HTML receiver page was successfully connecting to the WebRTC signalling server (via WebSockets), but it wasn’t receiving any audio stream. We added detailed logging throughout the Python sender and receiver scripts to help track down the issue.

Eventually, the problem revealed itself: the format of the generated sine wave was incompatible with the receiver's expectations. After adjusting the format to signed 16-bit, mono, and a sample rate of 48 kHz, the audio stream finally came through. That final adjustment resolved the issue, and we were able to hear the 440 Hz tone streamed in real time via WebRTC.
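That target format is straightforward to reproduce. A stdlib-only sketch of generating one second of the 440 Hz test tone as signed 16-bit mono PCM at 48 kHz (the half-scale amplitude and one-second duration are arbitrary choices of mine):

```python
import array
import math

SAMPLE_RATE = 48_000   # the sample rate the receiver expected
FREQ = 440.0           # concert-pitch A, our dummy test signal
AMPLITUDE = 0.5        # half of full scale, leaving headroom


def sine_s16_mono(duration_s: float = 1.0) -> array.array:
    """One channel of signed 16-bit PCM samples for a pure sine tone."""
    n = int(SAMPLE_RATE * duration_s)
    peak = int(AMPLITUDE * 32767)
    return array.array("h", (
        int(peak * math.sin(2 * math.pi * FREQ * i / SAMPLE_RATE))
        for i in range(n)
    ))


samples = sine_s16_mono()
print(len(samples), max(samples))  # 48000 samples, peak near 16383
```

Getting the sample format, channel count, and rate to agree on both ends was exactly the bug that had blocked us; a mismatch on any of the three produces silence or noise rather than an error.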

Although we didn’t reach the full functionality we’d envisioned, solving that major technical blocker felt like a win. We were confident that the groundwork had been laid for a full integration, and we’d created something that future students could easily pick up and build on.

Discussion

Team Evaluation

One of the best things about this project was how well the team worked together. From the start, we split up based on what each of us was strongest at, following the streams we were in: Unity for the lads doing game dev, AWS and cloud for those in cloud computing, while the WebRTC and audio work fell to those of us with more of a background in IoT or media.

It wasn’t a strict division either. Even though we had our own areas, everyone mucked in when needed. The cloud lads jumped in to help when we were deploying the signal server, and the Unity team were constantly checking in to make sure what we were sending could be visualised properly. That kind of back-and-forth meant no one was stuck in a corner or doing a task that didn’t suit them. It really did feel like a team effort from start to finish, and I think that’s a big part of why we managed to get as far as we did.

Intended vs Final Architecture

If you compare what we set out to build with what we actually ended up with, there’s definitely a gap, but not in a bad way. The original plan was to take real-time HRV data from the micro:bit, convert that into audio on the fly, send it through a WebRTC connection, and stream it into visual and audio outputs through Unity and React apps. On paper, it was a solid setup.

What we finished with was slightly more pared back, but it kept the core idea alive. The micro:bit still collected data, and we were able to stream dummy sine wave audio through the pipeline, proving the tech worked. The live HRV-to-audio part was swapped out for a placeholder, mainly because of time. But the audio streamed, the apps picked it up, and the cloud-based signalling worked. It wasn’t the full dream, but it was a solid working proof of concept.

Future Use Case

There’s real potential in this project. With more time and a bit of polish, it could easily be developed into something useful in education or even health monitoring. The HRV readings could be used for stress tracking or biofeedback training, where someone sees and hears their heartbeat in real time and learns to control it through breathing exercises.

It could also act as a learning tool for schools or workshops, showing students how data travels from a sensor, through conversion, across the cloud, and into an app they can interact with. And because the data is streamed in real time and visualised, it’s not just a black-box system. You can see and hear exactly what’s happening.

We made sure to build the system in a way that someone else could easily pick up where we left off. The code’s on GitHub, the architecture is mapped out clearly, and we kept things modular so it wouldn’t take much to slot the actual HRV audio back in where the dummy sine wave sits now.


Conclusion

Looking back, this final sprint really pulled together everything we’d learned over the semester: working with sensors, analogue input/output, cloud infrastructure, streaming protocols, and building something as a team under pressure.

We mightn’t have ticked every single box from the original plan, but we came away with a fully functioning prototype that shows the idea works. More importantly, we’ve handed over something solid for next year’s group to build on. What started as a mad two-day push ended up being a proper showcase of our skills, our teamwork, and our ability to get stuck in and deliver something meaningful.

"A Bigger Heart" was meant to be the name of the artefact, but it ended up being a fair reflection of how the team approached the whole thing: open, collaborative, and with a bit of heart behind it.


Appendix

Signalling server setup code (JavaScript)

const WebSocket = require('ws');

const wss = new WebSocket.Server({ port: 8080 });

wss.on('connection', function connection(ws) {
  ws.on('message', function incoming(message) {
    // Broadcast incoming message to all clients except the sender
    wss.clients.forEach(function each(client) {
      if (client !== ws && client.readyState === WebSocket.OPEN) {
        client.send(message);
      }
    });
  });
});

console.log('Server running on port 8080');

Receiver script HTML

<!DOCTYPE html>
<html>
<head>
  <meta charset="UTF-8">
  <title>Sine Wave Receiver</title>
</head>
<body>
  <h2>Streaming 440Hz Sine Wave...</h2>
  <audio id="audio" autoplay controls></audio>

  <script>
    const pc = new RTCPeerConnection({
      iceServers: [{ urls: "stun:stun.l.google.com:19302" }]
    });

    const ws = new WebSocket("ws://54.216.122.197:8080"); // match sender

    ws.onopen = () => console.log("[+] WebSocket connected");

    ws.onmessage = async (event) => {
      // The server relays binary frames, so data arrives as a Blob
      const text = await event.data.text();
      const data = JSON.parse(text);
      console.log("[<] WS:", data);

      if (data.type === "offer") {
        await pc.setRemoteDescription(new RTCSessionDescription(data));
        const answer = await pc.createAnswer();
        await pc.setLocalDescription(answer);
        ws.send(JSON.stringify({
          type: pc.localDescription.type,
          sdp: pc.localDescription.sdp
        }));
      } else if (data.type === "candidate" && data.candidate) {
        await pc.addIceCandidate(data.candidate);
      }
    };

    pc.onicecandidate = (event) => {
      if (event.candidate) {
        // Include a "type" field so the Python sender recognises the message
        ws.send(JSON.stringify({ type: "candidate", candidate: event.candidate }));
      }
    };

    pc.ontrack = (event) => {
      console.log("[🎵] Received audio track");
      const audio = document.getElementById("audio");
      if (audio.srcObject !== event.streams[0]) {
        audio.srcObject = event.streams[0];
        console.log("[🎧] Audio stream set");
      }
    };
  </script>
</body>
</html>

Sender Sine Wave Script

import asyncio
import json
import time
import websockets
import av

from aiortc import RTCPeerConnection, RTCSessionDescription
from aiortc.mediastreams import AudioStreamTrack
from aiortc.sdp import candidate_from_sdp


class AudioFileTrack(AudioStreamTrack):
    """
    A MediaStreamTrack that reads and streams audio from a file using PyAV,
    with real-time pacing based on timestamps.
    """
    def __init__(self, path):
        super().__init__()  # Initialize as audio track
        self.container = av.open(path)
        self.stream = self.container.streams.audio[0]
        self.frames = self.container.decode(self.stream)
        self.start_time = None

    async def recv(self):
        frame = next(self.frames, None)
        if frame is None:
            print("End of audio stream.")
            raise asyncio.CancelledError("No more audio frames")

        if self.start_time is None:
            self.start_time = time.time()

        # Pace the frames according to their timestamp
        now = time.time()
        expected_play_time = self.start_time + frame.time
        delay = expected_play_time - now
        if delay > 0:
            await asyncio.sleep(delay)

        return frame


async def connect_websocket():
    uri = "ws://54.216.122.197:8080"  # Replace with your signaling server address
    return await websockets.connect(uri)


async def run():
    print("Connecting to signaling server...")
    pc = RTCPeerConnection()
    ws = await connect_websocket()
    print("Connected to signaling server.")

    # Add the audio track
    audio_path = '/Users/daniellawton/Documents/IoT_Lock_In/audio/12_DARE_48k.wav'  # Change this to your path
    audio_track = AudioFileTrack(audio_path)
    pc.addTrack(audio_track)
    print("Audio track added.")

    answer_received = False  # Track whether we've already handled the answer

    # Note: aiortc gathers its ICE candidates while setLocalDescription runs
    # and embeds them in the SDP, so this handler may never fire
    @pc.on("icecandidate")
    async def on_icecandidate(event):
        if event.candidate:
            candidate_dict = {
                "type": "candidate",
                "candidate": {
                    "candidate": event.candidate.candidate,
                    "sdpMid": event.candidate.sdpMid,
                    "sdpMLineIndex": event.candidate.sdpMLineIndex
                }
            }
            await ws.send(json.dumps(candidate_dict))
            print("Sent ICE candidate.")

    # Create offer
    offer = await pc.createOffer()
    await pc.setLocalDescription(offer)
    print("Sending offer...")
    await ws.send(json.dumps({
        "type": pc.localDescription.type,
        "sdp": pc.localDescription.sdp
    }))

    try:
        async for message in ws:
            if isinstance(message, bytes):
                message = message.decode("utf-8")
            data = json.loads(message)

            if data.get("type") == "answer":
                if not answer_received:
                    print("Received SDP answer.")
                    await pc.setRemoteDescription(RTCSessionDescription(
                        sdp=data["sdp"],
                        type=data["type"]
                    ))
                    answer_received = True
                else:
                    print("Duplicate answer received — ignored.")

            elif data.get("type") == "candidate":
                candidate_info = data["candidate"]
                # aiortc cannot build an RTCIceCandidate straight from the
                # browser's SDP string, so parse it with candidate_from_sdp
                # (stripping the leading "candidate:" prefix first)
                sdp_str = candidate_info["candidate"]
                if sdp_str.startswith("candidate:"):
                    sdp_str = sdp_str.split(":", 1)[1]
                candidate = candidate_from_sdp(sdp_str)
                candidate.sdpMid = candidate_info["sdpMid"]
                candidate.sdpMLineIndex = candidate_info["sdpMLineIndex"]
                await pc.addIceCandidate(candidate)
                print("Added ICE candidate from receiver.")

    except Exception as e:
        print(f"Error during signaling: {e}")
    finally:
        print("Cleaning up...")
        await ws.close()
        await pc.close()


# Run the sender
if __name__ == "__main__":
    asyncio.run(run())
