How do you create autonomous robots that can investigate under the sea?

Wednesday 18th May 2022, 12.30pm

How do you retrieve data from sensors embedded in underwater settings – such as those monitoring ecosystem change, for example? Well, when human divers aren’t an option (which is often the case) it’s over to the autonomous robots! In this episode of the Big Questions Podcast we speak to Prof Nick Hawes from the Oxford Robotics Institute about the challenges – and possibilities – that such robots bring to the field.


(Music)

Emily Elias: Sending an autonomous robot out on a mission to collect data from sensors is tricky. Now add water to the situation, and you’ve got yourself a real challenge.

On this episode of the Oxford Sparks Big Questions podcast, we’re looking at smart, decision-making robots, and we’re asking: how do you create autonomous robots that can investigate under the sea?

Hello, I’m Emily Elias, and this is the show where we seek out the brightest minds at the University of Oxford and ask them the big questions.

And for this one, we’ve reached a fan of both robots and decision-making sequences.

Nick Hawes: Okay, my name is Nick Hawes. I’m an associate professor in the Oxford Robotics Institute, which is part of the Department of Engineering Science at the University of Oxford.

I run a group there called GOALS, which is the Goal-Orientated Autonomous Long-Lived Systems group, and in that group we work on decision making for robots and other autonomous systems.

Emily: Okay, so one of the projects you’ve been working on required you to build a robot to go underwater and look at the bottom of Loch Ness.

Now I think it’s pretty safe to assume you were not looking for monsters, so what was the problem you were trying to solve?

Nick: So, the problem we were trying to solve was, how do you get data back from sensors embedded in underwater settings?

So we weren’t exactly looking at the bottom of Loch Ness; what we were doing was developing decision-making techniques for robots that will retrieve data.

The background, really, is that underwater sensors are used in all sorts of different contexts. Particularly for monitoring underwater ecosystems and monitoring for climate change, people typically want to know about the salinity of the water, the temperature, maybe other aspects of its chemical composition, light levels, all sorts of things.

And to get that data, you typically have to leave sensors in the water, perhaps on the bottom of the ocean, perhaps on the bottom of a loch like Loch Ness. But the challenge then is to get the data back, because whether you’re in the ocean or a loch or a lake, you don’t have the built-in infrastructure to connect wires up and just pull the data off directly.

And because you’re underwater, you don’t have access to 3G or 4G signals to just send the data over the air.

So traditionally, people would go there and take the data out manually, replacing the hard drives or whatever.

A good example of that is something called the RAPID array, which is a line of sensors across the Atlantic used for climate change modelling, and every 18 months a boat has to sail along this line of sensors and someone effectively just picks the data off and puts in a fresh hard drive.

But we don’t want to… we can’t keep doing that. It’s very expensive in many cases: you have to send a human diver somewhere it’s risky for them to go, or just expensive.

Or you have to use a big, carbon-guzzling boat to get the human there.

That’s a very long-winded way of saying we’re interested in retrieving data from sensor installations under the water using robots.

And so we want to have a robot that can go to these locations, pick up the data and bring it back to the scientists who want to do climate modelling or monitor ecosystems.

Emily: So, it sounds like a simple plan, simple being in air quotes, but what’s so tricky about making this type of robot work underwater?

Nick: So the big challenge with the robot in underwater settings is the huge amount of uncertainty associated with the robot’s position.

So if you want to have a robot that can do this without spending a huge amount of money, you typically end up with small, low-powered, lightweight vehicles that have limited ability to control their own direction while they’re flying, or kind of swimming, underwater, and even less ability to know where they are.

So robots on land, when they’re driving around, typically use distance sensors to measure their positions relative to local landmarks, landmarks they can observe using lasers or cameras. You’ll be used to doing something similar with GPS: GPS uses landmarks too, but the landmarks are satellites.

So if you’re a car or a person, you typically use GPS; if you’re a robot on the ground, you might use GPS, and you might also use lasers or sonar or radar.

Underwater, none of that works. You might use some kind of acoustic signal, some kind of sonar, bouncing off the bottom of a lake or the ocean, but it’s expensive and challenging to do that well.

So you have this huge amount of uncertainty over the position of the robot.

And you also have real uncertainty about the value of the data: with these sensor networks, you don’t know whether the data that’s on them is going to tell you anything interesting.

So that means you have to make decisions that reduce your uncertainty, that, kind of, minimise how lost you are under the water, while also trying to maximise the amount of value you get as a scientist, or as a robot explorer, by visiting the nodes underwater that give you the most valuable data back.

Emily: And so, these autonomous robots are making their own decisions about which sensor to go to and prioritise.

What’s your role in helping them decide what to do since the goal is that they are autonomous?

Nick: Our role in this has been sort of twofold.

One has been finding ways to model this problem, exploiting the work of our collaborators, to build a reliable system; and then, once we’ve got that model, we’ve developed algorithms to optimise the robot’s behaviour within it.

So, perhaps to say a little bit about the project first.

This project is called HUDSON and it is being done in collaboration with the National Oceanography Centre along with Heriot-Watt University and Newcastle University.

So these partners have developed parts of the system. Heriot-Watt developed the sensor nodes, these very low-power things that sit underwater in some kind of network. That typically means they can talk to each other, but they can’t talk outside of the local area under the water.

Newcastle have developed an acoustic modem. A modem is a device that converts the digital signals inside a computer, or one of these sensor nodes, into an analogue signal that can be broadcast, and it also does the reverse: it can detect analogue signals and convert them back into digital ones.

So it’s what computers used to talk to the internet back in the nineties and early noughties.

Emily: That whole, cha-char, cha-char, owowowo.

Nick: Yep, that’s exactly the right noise, yes.
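
For readers curious what that digital-to-analogue conversion looks like in code, here is a minimal sketch of frequency-shift keying (FSK), one classic scheme of exactly this kind, where each bit becomes a short tone. The frequencies, bit duration and sample rate below are invented for illustration; they are not the parameters of the Newcastle acoustic modem.

```python
import numpy as np

# Minimal FSK sketch: each bit becomes a short tone at one of two
# frequencies (the analogue signal); demodulation recovers the bits by
# checking which tone dominates each chunk. All parameters here are
# illustrative, not those of the real acoustic modem.
SAMPLE_RATE = 48_000                               # samples per second
BIT_DURATION = 0.01                                # seconds of tone per bit
SAMPLES_PER_BIT = int(SAMPLE_RATE * BIT_DURATION)  # 480 samples
FREQ_0, FREQ_1 = 2_000.0, 4_000.0                  # tones for a 0 and a 1 bit

def modulate(bits):
    """Digital -> analogue: map bits to a concatenated tone waveform."""
    t = np.arange(SAMPLES_PER_BIT) / SAMPLE_RATE
    tones = {0: np.sin(2 * np.pi * FREQ_0 * t),
             1: np.sin(2 * np.pi * FREQ_1 * t)}
    return np.concatenate([tones[b] for b in bits])

def demodulate(signal):
    """Analogue -> digital: classify each chunk by its peak frequency."""
    bits = []
    for i in range(0, len(signal), SAMPLES_PER_BIT):
        chunk = signal[i:i + SAMPLES_PER_BIT]
        freqs = np.fft.rfftfreq(len(chunk), 1 / SAMPLE_RATE)
        peak = freqs[np.argmax(np.abs(np.fft.rfft(chunk)))]
        bits.append(0 if abs(peak - FREQ_0) < abs(peak - FREQ_1) else 1)
    return bits

payload = [1, 0, 1, 1, 0, 0, 1, 0]
assert demodulate(modulate(payload)) == payload  # round trip succeeds
```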

Emily: So that sensor’s at the bottom of a body of water, and it’s just making that sound, and-

Nick: And then the National Oceanography Centre helps us put everything together. They provide the vehicle, called an ecoSUB, which is the in-water vehicle we’re putting all of this into.

And the way we’ve been using these sensors has actually been twofold. So, we use the fixed sensors underwater to give us data.

We also use them to help with localisation, because we know the positions of the sensors underwater, and they each have a unique signature which they broadcast over the acoustic signal. When the ecoSUB gets near to these sensors, the signal they give off gives the robot a clue about where it is. So you can start to think about this as a kind of underwater GPS.

So instead of having GPS satellites that allow your device to work out where you are on Earth, under the water you’ve got signals coming off all these underwater beacons, which are part of the sensor network.

So our robot knows that if it goes closer to particular sensors, it can improve its knowledge of its location. And it also starts to get a little bit of information about what kind of data is on those sensors, because the sensors can do a little bit of on-board processing to work out whether their data is interesting or not.
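
To make the “underwater GPS” idea concrete, here is a toy sketch of how hearing a beacon at a known position can shrink a robot’s position uncertainty, using a simple particle filter. The range-only measurement model and all the numbers are assumptions for illustration; this is not the HUDSON system’s actual localiser.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy particle filter: the robot's belief about its 2D position is a
# cloud of particles. A noisy range reading to a beacon at a known
# position reweights the cloud, concentrating it near the truth.
particles = rng.uniform(0, 100, size=(5_000, 2))  # initially very lost

def hear_beacon(particles, beacon_xy, measured_range, noise_std=2.0):
    """Reweight and resample particles given a noisy range to a beacon."""
    dists = np.linalg.norm(particles - beacon_xy, axis=1)
    # Gaussian likelihood of each particle under the range measurement.
    weights = np.exp(-0.5 * ((dists - measured_range) / noise_std) ** 2)
    weights /= weights.sum()
    keep = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[keep]

beacon = np.array([40.0, 60.0])
particles = hear_beacon(particles, beacon, measured_range=10.0)
print("position spread after one ping:", particles.std(axis=0))
```

After one range-only ping, the belief collapses to a ring around the beacon; hearing a second beacon narrows it further, which is why moving closer to beacons improves the robot’s knowledge of its location.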

So we took all of these kind of different parts and built a computational model that describes the evolution of the system over time.

So we use a model called a Markov decision process. This is a way we can describe the probability of things happening after the robot takes particular actions, and we’ve built a system on the robot that uses this Markov decision process to decide the appropriate sequence of actions, one that allows the robot both to know where it is when it’s underwater and to maximise the amount of data it can get within a certain timeframe.

So these robots have limited battery life and you want to maximise the amount of data you get back within this battery life whilst also minimising the probability that you get lost.
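
As a concrete illustration of that trade-off, here is a toy Markov decision process in the same spirit: states track which beacon the robot is at and how much battery remains, each move risks getting lost, and value iteration produces the behaviour. The beacon graph, data values and probabilities are all invented for illustration; the real HUDSON model is far richer.

```python
# Toy MDP: state = (current beacon, battery left), plus a terminal
# "lost" state. Actions choose the next beacon to visit. All numbers
# are invented; a real model would also track which data was collected.
BEACONS = ["A", "B", "C"]
DATA_VALUE = {"A": 1.0, "B": 5.0, "C": 3.0}  # reward for visiting each node
P_LOST = {"A": 0.05, "B": 0.20, "C": 0.10}   # risk of getting lost en route
MAX_BATTERY = 4                               # moves before the battery dies

STATES = [(b, t) for b in BEACONS for t in range(MAX_BATTERY + 1)] + ["lost"]

def transitions(state, action):
    """(probability, next_state, reward) outcomes of taking `action`."""
    _, battery = state
    p = P_LOST[action]
    return [(p, "lost", 0.0),                                    # got lost
            (1 - p, (action, battery - 1), DATA_VALUE[action])]  # arrived

def terminal(s):
    return s == "lost" or s[1] == 0  # lost, or out of battery

# Value iteration: best achievable expected data from every state.
V = {s: 0.0 for s in STATES}
for _ in range(MAX_BATTERY):
    for s in STATES:
        if not terminal(s):
            V[s] = max(sum(p * (r + V[s2]) for p, s2, r in transitions(s, a))
                       for a in BEACONS)

# The policy: for each state, the action with the highest expected value.
policy = {s: max(BEACONS,
                 key=lambda a: sum(p * (r + V[s2])
                                   for p, s2, r in transitions(s, a)))
          for s in STATES if not terminal(s)}

print(policy[("A", MAX_BATTERY)])  # best first move from A on a full battery
```

Even in this toy version you can see the structure: the value of an action folds in both its immediate data reward and the risk that getting lost forfeits everything the robot could still collect.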

So if the robot gets too lost and gets too far away from these underwater beacons, it can effectively just sort of go off into the loch somewhere and never come back.

Well, “never come back” is a bit extreme; typically what happens is that once it knows it’s really lost, it just surfaces and floats there until someone comes and retrieves it.

Emily: Is your role kind of like being mission control for the robot when it goes out on its mission so you can keep track that it is doing what it needs to be doing?

Nick: Yes, mission control is a good way of thinking about it. When you do these things with a kind of manual system, you have a human operator who, every time the robot surfaces, sends it off for its next action; or, alternatively, they script a very predictable path. So either the robot does, say, lawnmower patterns, which are entirely unresponsive to the data or the underwater conditions, or you need a human acting as mission control.
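
The “lawnmower pattern” is worth seeing, since it is the kind of scripted baseline the adaptive planner gets compared against later in the conversation. Here is a minimal sketch of one: a back-and-forth sweep of waypoints, fixed in advance and completely blind to whatever the robot senses. The survey dimensions are invented for illustration.

```python
# Sketch of the scripted "lawnmower" baseline: a fixed back-and-forth
# sweep over a rectangle, unresponsive to data or conditions.
def lawnmower(width_m, height_m, spacing_m):
    """Generate (x, y) waypoints for a back-and-forth rectangular survey."""
    waypoints, y, going_right = [], 0.0, True
    while y <= height_m:
        row = [(0.0, y), (width_m, y)]
        waypoints.extend(row if going_right else row[::-1])
        y += spacing_m
        going_right = not going_right
    return waypoints

print(lawnmower(100.0, 40.0, 20.0))
# [(0.0, 0.0), (100.0, 0.0), (100.0, 20.0), (0.0, 20.0), (0.0, 40.0), (100.0, 40.0)]
```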

So what we do is we take this model, and we solve the model for some kind of criteria.

Typically: minimise the probability the robot gets lost, maximise the amount of value you get back. And that produces something called a policy. That policy is a piece of software that maps the state of the robot, i.e. what beacons it can see, how much data it’s got on board, what the time of day is, and so on.

So we’re thinking about this as a state; it’s like a snapshot of the system.

So the policy maps that snapshot to the action you perform next, and the model of the system allows us to look forward in time, work out which outcomes are probable from particular actions, and make sure the robot knows what to do under all those different circumstances.

So rather than having to radio back to base and say, “Mission control, what should I do?”, the robot looks into its memory, where it has this policy, and goes, “Okay, I can see these two beacons and I’ve got this much data, so I’m going to visit this beacon next”, for example.
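
In code, that on-board decision step really is just a lookup once the policy has been computed offline. Here is a minimal sketch; the state fields and action names are hypothetical, and a real state would be much richer:

```python
from typing import NamedTuple

class State(NamedTuple):
    """A snapshot of the system; the fields here are hypothetical examples."""
    visible_beacons: frozenset  # beacon IDs currently in acoustic range
    data_mb: int                # how much data is on board

# Policy computed offline by solving the MDP, then shipped to the robot:
# it maps each state snapshot directly to the next action.
policy = {
    State(frozenset({"B1", "B2"}), 0): "go_to_B3",
    State(frozenset({"B3"}), 40):      "go_to_B4",
    State(frozenset(), 40):            "surface",  # too lost: give up safely
}

def decide(state):
    # No radio call back to mission control: consult the on-board policy.
    return policy.get(state, "surface")

print(decide(State(frozenset({"B1", "B2"}), 0)))  # -> go_to_B3
```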

Emily: And have you guys been able to get this to work?

Nick: (Laughter)

Yes, we’ve been able to get it to work. We’ve had two trips, and an amazing student of mine called Matt Budd has been leading the development work from our end.

He’s been to Loch Ness twice. The first time they went, they had a very frustrating week where it turned out that there was a large thermocline, a difference in temperature across the loch. A thermocline interferes with acoustic propagation underwater, and that really just stopped all the nodes talking to the vehicle, and vice versa.

So, (Laughter) they spent a week sending the robot off and not getting it back, or waiting a long time for nothing good to happen, really.

They went back again in the autumn and had a much more successful time.

So we’ve got lots of examples of the robot collecting data from these settings. And importantly for us, because our science, the work we do, is about developing these models and algorithms, we want to be able to show that using our approach is better than using a baseline approach, which would be, for example, a hand-scripted mission that a person has written and given to the robot.

So we were also able to demonstrate that by using this planning technique, this mission control technique that we developed, you get more data in less time.

So not only does it work, it works better than, let’s say, the traditional alternatives.

Emily: Is there a bigger application you can see for it in the future?

Nick: Oh yes, I mean there’s tons of things you can do with this.

So we can sort of separate two things: one is applying this to marine vehicles, and the other is the general techniques themselves.

So we’re applying the general decision-making-under-uncertainty techniques to all sorts of different areas.

So we’ve been looking at using them on robots that perform ultraviolet disinfection under time limits, where there’s a lot of uncertainty about where the robots are positioned. So this is sort of ground robots cleaning things; that’s an interesting area.

We’re using the same decision-making techniques for robots that explore nuclear disaster zones or monitor nuclear facilities. There, the uncertainty is in what hazards are present and how successfully the robot is navigating.

And then in the marine environment, we’re working with the National Oceanography Centre again, and with the British Antarctic Survey, to develop a much larger-scale version of these missions using gliders. These are underwater vehicles that sort of swim up and down in the water column and can travel long distances for very little energy.

We’re developing techniques to manage a fleet of gliders to gather climate-relevant data from the Southern Ocean. That’s where we want to go with this: we want to have robots monitoring the Southern Ocean, gathering loads of data.

There are fixed sensor buoys that the robots can read data from, and they can also do experiments themselves, sample data themselves.

So we’d like to scale this up to a fleet of these robots surveying oceans.

Emily: That’s got to feel pretty cool.

Nick: (Laughter)

Yes, it’s nice, yes. I mean, robots can be used for all sorts of things, and helping other scientists is great; it does feel good to be doing it, sure.

Emily: This podcast was brought to you by Oxford Sparks from the University of Oxford, with music by John Lyons and a special thanks to Nick Hawes.

We are on the internet at OxfordSparks.ox.ac.uk, or you can say hello to us on whatever social media platform your heart desires; we’re at @OxfordSparks. That’s our name, don’t wear it out. We’d love to hear from you.

I’m Emily Elias. Bye for now.

(Music)


Transcribed by UK Transcription.