Dr Adam Stanton

Researcher in Adaptive Informatics

2018-06-06
by as
0 comments

Why do robots look like animals and humans?

Many of the most advanced robots are inspired by nature.
US Department of Defense

Adam Stanton, Keele University

Boston Dynamics’ dog-like SpotMini robot is to go on sale in 2019. This cute and uncannily realistic canine-bot is just one of many robots inspired by the natural world. Human engineers increasingly look to living systems for clues to good design, whether it be emulating an insect brain’s ability to navigate or building robots with bacterial stomachs that produce electricity.

In the video below, you can see SpotMini lean backwards on birdlike legs, counterbalance the weight of the heavy door and smoothly pull it open. The action is taken with a kind of animal grace, and for a moment its artificial origin seems to fade away. But why do we robot engineers base so many of our designs on animals? Is a dog-like robot the only sensible way to accomplish the tasks that SpotMini has been built to achieve, or are we just taking a shortcut and stealing from the natural world?

The answer is, of course, both. To understand why, we must think about how nature’s design came about, and also about what we want our robots to do.

The modular and symmetrical body plan that most animals share is a remarkable feat of natural design. It was this layout that, during the Cambrian explosion some 500 million years ago, enabled the vast diversity of complex animal forms we see today.

The evolutionary benefit of bilateral bodies has given animals with this form adaptive advantages over most rival configurations. Traits such as balance and a sense of front and back are inherent aspects of the design. Legs with hips and knees are a relatively small extension that massively increase range and capability. These attributes give animals precise control and they are the foundations of a general intelligence, allowing creatures to navigate and explore new environments and difficult terrain. That’s why nearly every animal today conforms to the plan.

Nature’s other great unifier is her efficiency. Every adaptation that could improve a species’ use of energy was explored, and wasteful variants were swiftly out-competed by thriftier cousins. We can see it in the poise of a cat’s jump and the precision of a fish’s dart, even in the rhythm and bounce of our own walking.

Animals are remarkably efficient, and adaptable to new situations and conditions. Robot designers want their creations to be similarly capable. After all, the fundamental constraints that nature has been working with over billions of years still apply, whatever the purpose is of the robots we create.

Navigating the human world

But unlike most animals, we want our robots to be effective not just in the natural environment, but also within the human domain. This means that we create robots suited for a world designed by humans.

Humans are animals, and we operate according to the properties of our bodies. The prehistoric world shaped us. Natural selection favoured our limbs, eyes, hands and even our sense of direction over long-extinct competitors.

Boston Dynamics’ human-like robot
Wikimedia/Kansas City, CC BY-NC-SA

Today, the world we’ve constructed reflects this history. People rarely stop and think about it, but our evolutionary heritage is actually encoded in our doors and staircases, our signs and signals, our cupboards and our corridors. We have designed these objects around our own physical characteristics. The closer a body is to a human’s, the better it will navigate and manipulate a human world.

The clear parallels between robots and living things in physical design and behaviour invite us to wonder why these machines should be so lifelike. We should remember that, as we try to build machines that operate in our worlds of culture and prehistoric survival, we impose on them the same constraints that those worlds imposed on us. These constraints leave engineers surfing in nature’s wake, marvelling at her creativity and efficiency. And, as we demand more of our machines in the human world, it shouldn’t be surprising that they often begin to look more and more like ourselves. Whether we make a conscious choice to copy nature, or try to design an effective machine from first principles, the results are likely to be the same.

As SpotMini slips on a banana skin in a comedy fall, I laugh and sympathise with it in equal measure. Our robot designs might sometimes seem to be simple cheats stolen blindly, or even superficial pastiches of natural forms appropriated for purely aesthetic reasons. But imitation really is the sincerest form of flattery. In the case of robotics, it is a deep and respectful acknowledgement that nature’s way is hard to beat in any circumstance.

Adam Stanton, Lecturer in Evolutionary Robotics and Artificial Life, Keele University

This article was originally published on The Conversation. Read the original article.

2018-05-17
by as
0 comments

How can I make my email arrive only once per day?

The postman comes to my house once per day – if that. My email arrives all the time, and that’s bad for productivity and focus. Happily, with Gmail-based email it’s possible to change this in a very effective way.

Three things need to happen to achieve it:

  1. Incoming email needs to be moved to a holding area until it’s postman time and your messages get delivered.
  2. The holding area should be hard to access, otherwise you’ll sneakily check your email too often!
  3. The postman should deliver your messages (move them from the holding area to the inbox) according to some schedule. For me, that’s once per day at 0800.

To achieve the first two, create a new label called __deferred in Gmail:

Then, make sure it’s hidden from you as well as possible:

Then create a new filter to move new mail to the deferred folder (here I use anything that’s not from me as the criterion):

Now, all you need is a way to schedule moving the emails from “deferred” to inbox when you want your postman to visit (point three, above). This requires you to create a script that runs on a schedule, using Google Apps Script.

Rather than copy it out in full, I refer you to the original source of this tutorial. Scroll down to “Creating the Google Script” and follow the instructions. The one slight change is in accessing the triggers: just press the highlighted button:
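
For reference, the heart of that script is only a few lines. Here is a minimal sketch in Google Apps Script, assuming the __deferred label created above; the function name is my own, and the original tutorial has the full version plus the trigger setup:

// Deliver everything from the holding area to the inbox.
// Attach a time-driven trigger to this function, e.g. daily between 8am and 9am.
function deliverMail() {
  var label = GmailApp.getUserLabelByName("__deferred");
  if (!label) return;                // no label: nothing to deliver
  var threads = label.getThreads();
  for (var i = 0; i < threads.length; i++) {
    threads[i].moveToInbox();        // the postman delivers...
    threads[i].removeLabel(label);   // ...and empties the holding area
  }
}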

2018-04-26
by as
0 comments

How can I make D&D more nerdy?

Here’s a bash function for rolling the dice!

d() {
  # expects a dice code: NdS with an optional +/- modifier, e.g. 2d6, 1d20, 4d8+4
  if ! echo "$1" | egrep -q "^[0-9]*d[0-9]+([+-][0-9]+)?$"; then
    echo "Incorrect dice code format."; echo; return 1
  fi
  # split the code into number of dice (n), sides (s) and modifier (m)
  eval $( echo "$1" | awk -F'[d+-]' '{print "n="$1";s="$2";m="$3}' )
  t=0
  for (( c=1; c<=${n:-1}; c++ )); do   # default to one die if the count is omitted
    v=$(( 1 + RANDOM % s ))            # roll one s-sided die
    echo $v
    (( t += v ))
  done
  if [ -z "$m" ]; then
    echo "Total: $t"
  elif echo "$1" | egrep -q "\+"; then
    echo "Total: $t+$m=$(( t + m ))"
  else
    echo "Total: $t-$m=$(( t - m ))"
  fi
}

Here's an example use:
mbp:~ user$ d 1d20; d 4d8+4
19
Total: 19
2
1
4
6
Total: 13+4=17

2016-06-28
by as
0 comments

Google’s Project Tango – is it a gimmick or is it something with real promise for the future?

At its best, computing technology has the potential to make our imaginations come alive. Whether simple asteroid-blasting arcade games or super-scale simulations of the distant reaches of the cosmos, our experiences of the world have extended way beyond the everyday. The hard division between the real and the fantastic has blurred and Augmented Reality (AR) technology takes this idea to a very practical conclusion: live, digital mash-ups of real spaces and virtual objects.

Google’s Project Tango has the potential to put this technology in the pockets of everyone with a phone or tablet (currently it’s available only on Lenovo’s new phone). A ghostly hidden realm, viewed through the unreal ‘torchlight’ of a location- and orientation-aware device, is revealed by mapping out physical space with the camera and projecting a purely virtual dimension on top of the image.

Useful applications are easy to think of: a good indicator of the potential for future widespread adoption. Furniture catalogues with items that appear already placed in their intended locations; museum exhibits that come to life; virtual store attendants that direct you through the shop; and videogames whose theatres of play are the nooks and crannies of your own home. The myriad possibilities for creative expression and practical application are as varied as they are exciting.

However, any new technology arrives with a cautionary note: the history of invention is littered with casualties that once showed promise but, for whatever reason, didn’t deliver. Consider the first flush of Virtual Reality (VR) in the early 1990s as an example: although feted by technologists and a media staple, ultimately it was a victim of underdeveloped hardware and high cost. An inspiring idea was trapped in an expensive, lumbering and ugly package that delivered much less than it promised.

In contrast, today’s phones are powerful and ubiquitous. Developers who want to get involved need only download a package to begin creating apps using the technology. The low barrier to entry for consumers and creators makes novel user-interface paradigms like AR an easy sell, if only to satisfy curiosity. On its own this is already exciting, but Google also has a longer-term ambition: achieving widespread acceptance of its soon-to-be-released dedicated AR hardware, Google Glass 2.

The success of this platform will depend largely on the availability of AR-enabled applications, so a cheap-and-cheerful substitute today paves the way for more dedicated consumer hardware in the future by providing a ready-made app ecosystem. The uptake of dedicated hardware in turn produces novel applications and cements the technology in the public consciousness. Boosted by similar products from other vendors, and with overlapping consumer VR technologies enjoying their own surge in popularity, it’s likely that, far from being a gimmick, this technology has the potential to stick around for quite some time to come.

2016-05-17
by as
0 comments

National Computing

I recently had the good fortune to gain access to the most powerful (publicly known) computer in the UK – ARCHER. This Cray XC30 has 118,000 processing cores and is a workhorse for many large scientific projects requiring massive parallel data processing. My requirements are somewhat more modest, but it was still an interesting experience to wrangle the supercomputer (well, a mere 20 of its 5000 24-core processing nodes) and try to apply its awesome power to some Artificial Life.

As you might expect, everything is geared towards massively parallel MPI jobs, so my hand-rolled TCP-based task farm was a bit out of place. Parallel (compute) nodes are unable to break out of their network to get new work units, background processes for the task master are forbidden on login nodes and the functioning of inter-node networking is opaque. Given this I opted for a within-node task farm, running 23 (later 47, with hyperthreading) workers and one master process on a single node, communicating across the loopback interface. By requesting multiple nodes via an array job I was able to launch 20 such runs with a single job script.
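
The job script for such an array run ends up pleasingly short. A minimal sketch, assuming ARCHER’s PBS Pro scheduler and aprun launcher; alife-farm is an illustrative name, and run_farm.sh is a hypothetical wrapper that starts the master in the background and then launches the 47 workers:

#!/bin/bash --login
#PBS -N alife-farm
#PBS -l select=1             # one 24-core node per array element
#PBS -l walltime=24:00:00    # maximum walltime for a standard job
#PBS -J 1-20                 # array job: 20 independent single-node runs

cd $PBS_O_WORKDIR
# -n 1: a single instance on the node; -d 48 -j 2: use all 48 hyperthreaded cores
aprun -n 1 -d 48 -j 2 ./run_farm.sh $PBS_ARRAY_INDEX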

The sheer scale of this parallelism dwarfs anything I’ve had access to up to now: normally I get 3-4 workers per job, not 47. The increase did not disappoint; runs that normally take 3-4 weeks are done in 2-3 days. The only slight downsides are the contention – this is a busy system; my first 24h job waited for 58h before kicking off – and the limited job run time (24h maximum for a standard job). In practice this meant only a slight upgrade of my task farm to handle restarts more gracefully – a change long overdue anyway.

2016-05-16
by as
0 comments

How can I close a tcp/ip socket without killing the owning process?

Problem: a program has a socket open and you want to force it to close without killing the owning process.

Solution: get the file descriptor of the socket, debug the process and manually call close on the file descriptor.

On Linux systems:

  1. Find the offending process: netstat -np
  2. Find the socket file descriptor: lsof -np $PID
  3. Debug the process: gdb -p $PID
  4. Close the socket: call close($FD)
  5. Close the debugger: quit
  6. Profit.

From here.
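
Putting that together, a full session might look like this (the PID 1234 and descriptor 7 are placeholders; read the real values off the netstat and lsof output):

$ netstat -np | grep ESTABLISHED   # note the PID/Program name column
$ lsof -np 1234 | grep TCP         # note the FD column, e.g. "7u"
$ gdb -p 1234
(gdb) call close(7)
$1 = 0                             # close() returns 0 on success
(gdb) quit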

On OSX the incantations are differently formed:

  1. Find the offending process and file descriptor: lsof -ni TCP
  2. Debug the process: lldb -p $PID
  3. Close the socket: call (int)close($FD)
  4. Close the debugger: quit
  5. Profit.
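
Again with placeholder PID and descriptor, the macOS session looks like this (the explicit cast is needed because lldb can’t infer close()’s return type):

$ lsof -ni TCP                     # gives both the PID and the FD column
$ lldb -p 1234
(lldb) call (int)close(7)
(int) $0 = 0
(lldb) quit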

2016-05-13
by as
0 comments

How can I close a frozen SSH session?

As previously posted by Infertux here, a frozen SSH session when moving between networks is a common annoyance for anyone who works on remote servers. Thankfully there’s a quick fix (and a whole set of interesting commands) of which I wasn’t previously aware. These SSH escape sequences allow you to control the SSH client and gracefully disconnect, rather than closing terminals or killing processes. The specific incantation is as follows:

  1. [enter]
  2. ~
  3. .


And then voilà! Your SSH session disconnects and you’re back to your local shell. ~? will list the other escape sequences – see the link above for more details.
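
In practice it looks like this, with a placeholder hostname (the escape keystrokes themselves are not echoed to the terminal):

mbp:~ user$ ssh user@remote.example.org
user@remote:~$                     # network drops; type [enter] ~ . (nothing is echoed)
Connection to remote.example.org closed.
mbp:~ user$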

2016-02-12
by as
0 comments

Is Overleaf any good?

Recently I have been using Overleaf as a replacement LaTeX editor, instead of my previous favourite LyX. It’s a nice environment to work in, and the collaborative and hot-desking benefits outweighed any inertia I felt about moving to a remote cloud app. The principal downside is the lack of a decent UI for formatting maths: LyX still wins by a country mile on that metric, but it’s easy enough to paste in formulae after writing them locally.

All that’s required now is a mashup with something like Flowstate to actually get some productive work done.

2016-01-31
by as
0 comments

openFrameworks

I was introduced to openFrameworks during my talk at Reasons 2015 and reminded of it recently by algorithmic virtuoso Kelcey Swain. Joyously, I have finally found the time to play! I’m really pleased to have done so too – the library wipes out all the hassles of working with 2D and 3D graphics in C++; it works great on macOS and Linux, and more importantly it integrates well with my previous work on 3D Virtual Creatures. Props to James Acres, whose stencilled shadow maps really make the 3D come to life with minimum effort.

2016-01-30
by as
Comments Off on Post #1

Post #1

This small site is a handy notebook for me and a landing spot and archive for visitors who wish to find out a bit more about my work. To this end there are dedicated pages for current projects and published work, and also a front page which may feature the occasional opinion or interesting link in blog form.