Hi, my name is Richard Williams
Computer Vision, Robotics and Software Engineering Lead.


About Me

Profile Image

I am a passionate and driven lead engineer with over 10 years' experience building complex software for autonomous AI systems. In particular, I have extensive experience in computer vision, high-performance software systems and engineering management, gained while working across a range of sectors, including academic research, robots-as-art and food automation.

In addition to being an innovative engineer, I am known as a straightforward and dedicated leader, taking great pride in my ability to build engineering departments and to mentor and uplift junior engineers - as well as learn from them in turn.

This page collects some of my favourite personal projects, reminding me of things that I've done and providing a source of inspiration for future work.

View Resume

Projects

Sprung A Leak

This robot-as-art exhibit was completed as a side project during my PostDoc. The resulting exhibition ran for 6 months at Tate Liverpool (visited by 140,000 people) before going on tour to Haus der Kunst (Munich, Germany), M Museum (Leuven, Belgium) and Art Tower Mito (Mito, Japan).

Working with a close friend (also studying Robotics) and the artist Cécile B. Evans, I developed a framework that allowed two 'Pepper' robots to perform a play within a large-scale art installation, where members of the public were encouraged to enter and freely explore the space.

This framework included a custom navigation system, as well as integration with the existing Pepper API, allowing the robots to move freely around the space and perform physical gestures while avoiding collisions with the gallery itself and the roaming public.

Many late nights in both Tate Liverpool and Belgium were had.

Project Image

Jetson Nano Object Detection

This is an example project demonstrating an optimised object detection network deployed on a Jetson Nano. The example uses a MobileNet V2 SSD network, trained on a custom dataset of mini chocolate bars. The network is trained using the PyTorch framework, converted to ONNX format and then deployed on the Jetson Nano using the ONNX TensorRT runtime. This approach achieves inference speeds of >30 FPS, allowing the network to be used in real-time applications.
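A deployment pipeline like this needs a camera-frame preprocessing step that matches the exported model's input layout. A minimal sketch, assuming a 300x300 NCHW input and the common (x / 127.5) - 1 MobileNet normalisation (the exact size and normalisation depend on how the network was trained):

```python
import numpy as np

def preprocess(frame_rgb, size=300):
    """Resize an HxWx3 uint8 RGB frame and convert it to the NCHW
    float32 tensor layout expected by the ONNX model."""
    h, w, _ = frame_rgb.shape
    # Nearest-neighbour resize via index sampling (avoids a cv2 dependency).
    ys = np.arange(size) * h // size
    xs = np.arange(size) * w // size
    resized = frame_rgb[ys][:, xs]
    # Scale pixel values to [-1, 1]; this normalisation is an assumption.
    tensor = resized.astype(np.float32) / 127.5 - 1.0
    # HWC -> CHW, then add the batch dimension -> NCHW.
    return np.expand_dims(tensor.transpose(2, 0, 1), 0)
```

The resulting array can be fed straight to the runtime's inference call as the network input.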

This network was used in a very simple demonstration unit I built for a trade show, which uses an Automata Eva robot arm to pick up and place mini chocolate bars. It was quite a challenge, especially getting the CSI camera interface working on the Jetson Nano, but it was a lot of fun to build. I hope that by sharing it, others can avoid some of the pitfalls I encountered and build their own exciting projects.

RoboCup@Work & RoCKIn@Work

While at the University of Liverpool, I competed with fellow members of our robotics lab (smARTlab!) in several robotics competitions. My main contributions were the development of an object recognition framework for textureless objects, and programming behaviours using hierarchical concurrent state machines. We did well as a team, winning the 2014 RoboCup@Work World Championship as well as the 2015 RoCKIn@Work competition.
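The idea behind hierarchical concurrent state machines can be sketched in a few lines, in the spirit of tools like ROS's smach (this is an illustrative toy, not the competition code): states return outcome labels, and composite states are themselves states, so behaviours nest.

```python
class State:
    """Base class: a state executes and returns an outcome label."""
    def execute(self):
        raise NotImplementedError

class Sequence(State):
    """Hierarchical composition: run children in order, abort on failure."""
    def __init__(self, *children):
        self.children = children
    def execute(self):
        for child in self.children:
            if child.execute() != "succeeded":
                return "aborted"
        return "succeeded"

class Concurrence(State):
    """Concurrent composition: succeed only if every child succeeds.
    (A real implementation would run children in threads; this sketch
    simply evaluates them all.)"""
    def __init__(self, *children):
        self.children = children
    def execute(self):
        outcomes = [child.execute() for child in self.children]
        return "succeeded" if all(o == "succeeded" for o in outcomes) else "aborted"

class Action(State):
    """Leaf state standing in for a real robot behaviour."""
    def __init__(self, name, ok=True):
        self.name, self.ok = name, ok
    def execute(self):
        return "succeeded" if self.ok else "aborted"

# e.g. navigate while tracking the target object, then grasp it
behaviour = Sequence(Concurrence(Action("navigate"), Action("track_object")),
                     Action("grasp"))
```

Because composites are states, a whole task like `behaviour` can itself be dropped into a larger sequence.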

Shoebot

I recently started a robotics project with my fiancée called 'Shoebot'. The aim of Shoebot is to be a supercharged home-cleaning robot. Building on top of an iRobot Create 2, we are aiming to imbue Shoebot with some additional capabilities: better navigation so it can clean more efficiently, the ability to water (mist) the indoor plants and, finally, the ability to tidy up our shoes, which we are forever leaving in random places around the flat - or at least keep a tally of who is the worst culprit.

Project Image

Human Pose Analysis

I recently worked on a challenge aimed at analysing human pose data in the context of exercise tracking. The aim of the challenge was to analyse videos of a person waving and produce a system capable of tracking wave repetitions and wave percentage, and of providing feedback on form. The constraints were the use of the MediaPipe Pose library for pose tracking and a lightweight solution with minimal dependencies.
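Once the wrist landmark has been extracted per frame (e.g. the normalised x coordinate of a MediaPipe Pose wrist landmark, in [0, 1]), counting repetitions reduces to detecting full swings with hysteresis. A minimal sketch; the threshold values are illustrative, not from the challenge:

```python
def count_waves(wrist_x, lo=0.4, hi=0.6):
    """Count wave repetitions from a normalised wrist-x trace.

    A repetition is one full left -> right -> left swing; the two
    hysteresis thresholds (lo, hi) reject jitter near the midline.
    """
    reps, state = 0, None
    for x in wrist_x:
        if state is None and x < lo:
            state = "left"            # arm starts on the left side
        elif state == "left" and x > hi:
            state = "right"           # half a swing completed
        elif state == "right" and x < lo:
            reps += 1                 # full swing: count a repetition
            state = "left"
    return reps
```

Wave percentage (partial progress through the current swing) can be derived similarly, by interpolating the wrist position between the two thresholds.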

Project Image

LD19 LiDAR ROS 2 Driver

This project is a ROS 2 sensor driver node, written in C++, developed for an LDRobot LD19 sensor I received for my support of the Kickstarter project. The LD19 is a very low-cost, compact LiDAR sensor with a 12 metre range and a 360 degree FOV. LDRobot provide a basic ROS 1 driver, but I wanted to learn ROS 2, so I decided to write my own driver for the laser. In addition to the driver, I've added GitHub Actions to automatically run tests and linters, as well as build and deploy a Docker container for the driver. This project served a double purpose, as I wanted to use the driver for Shoebot, another personal project.
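Independent of the LD19's serial protocol, the core of a driver like this is binning per-point (angle, distance) returns into the fixed-size range array used by a `sensor_msgs/LaserScan` message. A minimal sketch of that step (in Python rather than the driver's C++, with the LD19's 12 m range and 360 degree FOV as defaults):

```python
import math

def build_scan(points, num_bins=360, max_range=12.0):
    """Bin (angle_deg, distance_m) LiDAR returns into a LaserScan-style
    range array. Bins with no valid return are set to inf, the ROS
    convention for 'no measurement'."""
    ranges = [math.inf] * num_bins
    for angle_deg, dist in points:
        if 0.0 < dist <= max_range:          # drop out-of-range returns
            i = int(angle_deg % 360.0 * num_bins / 360.0)
            ranges[i] = min(ranges[i], dist) # keep the nearest return per bin
    return ranges
```

The real node would wrap this array in a LaserScan message with the matching `angle_min`, `angle_increment` and timestamp fields before publishing.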

Project Image

Roboclaw Motor Controller Driver

This project is for two drivers I've written for the Roboclaw series of motor controllers. I've used Roboclaw motor controllers in several personal projects, as well as during my PostDoc on the Mitro telepresence robot. The drivers are geared around the use case of a differential-drive robot base, so they include an interface for controlling the base using linear and angular velocity commands, as well as calculating and publishing odometry. I released both the Python and C++ projects as open source for the community to use in their own Roboclaw-based projects.
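The differential-drive maths underneath such a driver is compact: a body twist maps to left/right wheel speeds, and odometry is the reverse mapping integrated over time. A minimal sketch (the wheel separation value is illustrative, not Roboclaw-specific):

```python
import math

def twist_to_wheels(v, w, track=0.4):
    """Map a body twist (v m/s forward, w rad/s yaw) to (left, right)
    wheel speeds for a differential-drive base; `track` is the wheel
    separation in metres."""
    return v - w * track / 2.0, v + w * track / 2.0

def integrate_odometry(x, y, theta, v_l, v_r, dt, track=0.4):
    """Dead-reckon the next pose from wheel speeds over a small timestep
    dt, using simple Euler integration."""
    v = (v_l + v_r) / 2.0        # forward speed of the base centre
    w = (v_r - v_l) / track      # yaw rate from the wheel speed difference
    return (x + v * math.cos(theta) * dt,
            y + v * math.sin(theta) * dt,
            theta + w * dt)
```

In the drivers, the first function feeds the controller's speed commands and the second accumulates the pose that gets published as odometry.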

Project Image

Contact

Get in touch now!
