The era of the autonomous tractor is upon us, says Danny Mann, head of Biosystems Engineering at the University of Manitoba.
However, there will be limitations. Mann says liability concerns mean that for the foreseeable future, there will have to be some level of human involvement in the technology.
Mann was speaking to an audience at a University of Manitoba seminar on February 15 titled, “Chasing the dream of the autonomous tractor… from a human factors perspective.”
Mann grew up on a farm near Roblin. While his career has focused on engineering and hard science, those early years on the farm shaped his outlook and are a big part of why his research has centred on technologies that help farmers.
Why it matters: Even autonomous equipment requiring some human supervision is going to solve some labour woes for the sector.

The quest for autonomous tractors dates back to the 1950s, when the first remote-controlled tractors were developed, but the concept of fully autonomous machines did not begin to gain traction until the 1990s, when John Deere and others introduced GPS guidance that could steer a tractor accurately through a field, though an operator was still needed in the cab.

Since then, several other companies have entered the autonomous tractor market. These tractors use a combination of GPS, sensors and machine-learning algorithms to navigate fields and perform tasks such as planting, tilling and harvesting.
Mann’s research began in the early 2000s. At that time, the technology was relatively rudimentary. Effective wireless technology was still about a decade away.
“It consisted of a camera attached by a long wire to a monitor,” said Mann. “What the camera saw, you’d see on the monitor.”
Because Mann sees a human role in the future of autonomous farm machinery, much of his research has focused on ergonomics. Even in the rudimentary example above, ergonomics came into play.
“We realized the placement of the camera was critical,” he said. An awkward camera angle could slow how quickly information reached the user.
An ergonomic evaluation looks at the relationship between input and feedback. In one early example, a linear array of green and red LED lights gave feedback to the driver of a simulator. Green lights in the middle indicated the user was on track; red lights to the left and right indicated a correction was needed. However, both the single linear array and the red/green colour scheme proved problematic. “A light bar more readily attracts attention when large clusters of yellow and blue LEDs are used,” said Mann.
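The core job of a light bar like the one described above is to map a steering error onto a row of lights. The sketch below is purely illustrative, not the simulator Mann's team built; the function name, LED count and error threshold are all invented for the example:

```python
def light_bar(error_m, num_leds=9, max_error_m=1.0):
    """Map lateral steering error (metres) to a simple LED light bar.

    The centre LED means "on track"; LEDs to the left or right show
    the direction and rough size of the needed correction.
    All names and thresholds here are illustrative, not from the study.
    """
    centre = num_leds // 2
    # Clamp the error to the range the bar can display.
    error_m = max(-max_error_m, min(max_error_m, error_m))
    # Scale the error to an LED offset from the centre position.
    offset = round(error_m / max_error_m * centre)
    lit = centre + offset
    return ["*" if i == lit else "." for i in range(num_leds)]

print("".join(light_bar(0.0)))   # on track: centre LED lit
print("".join(light_bar(-0.5)))  # drifting left: LED left of centre
print("".join(light_bar(1.5)))   # large error: outermost right LED
```

The colour-scheme finding Mann describes would change only which colours the lit positions use, not this underlying error-to-position mapping.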
The entire industry has come a long way since then. A more recent example of that ergonomic evaluation was a study that tracked subjects’ eye movements while they interacted with a remotely supervised tractor pulling a sprayer. Subjects worked from a two-part display: on the left, a video feed of the sprayer; on the right, a dashboard of sensor data.
“We looked at the trials in chronological order,” said Mann. In the earlier trials, the subjects’ eye movements indicated they relied on the visual cues from the sprayer, and their eyes were focused on the video about half the time. But as they went through more trials, the subjects got better at reading the sensor data and spent less and less time looking at the video.
“This seemed to indicate that the visual was less important,” said Mann. “But when we surveyed the users, they still wanted the visual information.”
Testing has since moved out into the field, and that has brought a host of concerns. One of those concerns was latency — that is, the delay between what the sensors are recording and when that information reaches the user.
“We developed a system with hardware using Ethernet and cellular data,” says Mann. “In general, the latency was acceptable. It worked really well on campus and pretty well at [Research Station],” he said. But when out in more remote areas, cellular data became unreliable. The Ethernet was still effective, but it illustrates that there are some technical hurdles to overcome.
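Latency of this kind is typically quantified by timestamping each sensor frame at capture and comparing that against when it reaches the operator's display. The sketch below is a generic illustration of that measurement, not the Ethernet/cellular hardware Mann's team built; the function and the simulated timestamps are invented for the example:

```python
import time

def measure_latency(frames):
    """Compute per-frame latency: display time minus capture time.

    `frames` is a list of (capture_ts, display_ts) pairs in seconds.
    Returns (mean, worst) latency in milliseconds. Illustrative only.
    """
    delays_ms = [(display - capture) * 1000.0 for capture, display in frames]
    return sum(delays_ms) / len(delays_ms), max(delays_ms)

# Simulated timestamps for a steady link delivering frames ~80 ms late.
now = time.time()
steady_link = [(now + i, now + i + 0.08) for i in range(10)]
mean_ms, worst_ms = measure_latency(steady_link)
print(f"mean {mean_ms:.0f} ms, worst {worst_ms:.0f} ms")
```

On an unreliable cellular link of the kind Mann describes, the worst-case figure is what matters: a single dropout can stretch one frame's delay far past the average even when the mean still looks acceptable.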
Mann says there are four different approaches to managing autonomous tractors. With in-field supervision, the operator sits in the tractor, and the automation just aids them in the operation. With edge-of-field supervision, the operator is monitoring the situation from nearby. A farm office supervision system would have the operator at a separate location on the farm (home or office). Outside farm supervision would mean the farmer could be operating the tractor from anywhere in the world. Mann sees edge-of-field supervision as the most likely to catch on, at least in the short term.
Mann says future research in the field could see a focus on auditory feedback. It is something that farmers use frequently in the field and is something he recalls from his days on the farm.
“I vividly remember I could hear the speed of the cylinder, and I quickly learned how to interpret those auditory cues,” he said. He’s unsure about what that might look like and what role it could play, but it’s an idea he’s filed away for future use.
He also sees the idea of remote supervision of multiple tractors as a likely development that could be attractive from an economic standpoint. This doesn’t necessarily mean one person operating a large fleet of tractors. Obviously, there are limits to the number of tractors a user can operate simultaneously, but even if that limit is low, it could still be beneficial.
“Currently, there is one operator for every tractor,” says Mann. “If we go to one user operating two tractors, we’re still saving labour.”