The camera was going to shake constantly. The sunlight would change, making a yellow line look different at noon from the way it did at dawn.
And what would happen when leaves blew onto the line, covering a part of it? Would the app interpret that break in the line as a parked car, and signal the runner to stop?
“We take examples and feed them into the model, classifying the pixels as one class and everything else as not in the class,” Ayalon said, referring to obstacles that might block the view of the line. “The model learns over time.”
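The labeling Ayalon describes is binary segmentation: every pixel is either "line" or "not line," and a model learns the boundary from labeled examples. As a rough illustration only, the sketch below fakes that decision with a hand-written color threshold instead of a trained model; the function name, thresholds, and toy image are all invented for this example and are not the team's actual system.

```python
import numpy as np

def classify_line_pixels(image: np.ndarray) -> np.ndarray:
    """Label each pixel True ("line") or False ("not line").

    Toy stand-in for a learned segmentation model: treat a pixel
    as part of the yellow line if red and green are high and blue
    is low. The thresholds are illustrative guesses, not learned.
    """
    r, g, b = image[..., 0], image[..., 1], image[..., 2]
    return (r > 150) & (g > 150) & (b < 100)

# A tiny 1x3 "image": one yellow pixel, one green, one gray.
img = np.array(
    [[[255, 220, 30], [20, 200, 20], [128, 128, 128]]],
    dtype=np.uint8,
)
mask = classify_line_pixels(img)
# mask[0] → [True, False, False]: only the yellow pixel is "line"
```

A real model replaces the fixed thresholds with parameters fitted to thousands of labeled frames, which is how it copes with changing sunlight and partial occlusion by leaves, the problems described above.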
So does the runner. Panek tested the technology for months over short distances, slowly gaining confidence, learning to trust the directional messages in his ears. Then, in November, it was time for a 5-kilometer run.
“Liberation is a huge motivation,” he said, “the idea of being self-reliant.”
Working with New York Road Runners, the organizer of the New York City Marathon, the technologists received permission to paint their yellow line around the north loop of Central Park, a 1.42-mile circuit that includes the climb known as Harlem Hill.
Despite the cold, Panek wore short sleeves. He has the wiry build of a veteran runner. The only hint of his sight loss is that his eyes sometimes appear to focus in different directions. But he adeptly compensates, following a voice and picking up on people’s unique sounds, looking toward them as he talks.
As noon approached, he was ready to run.
“Let’s go,” he said when it was time.
A starter told him to go, and he was off. He sprinted downhill toward his first turn as though he knew where he was headed. And then, about a minute in, the voice in Panek's headset, along with everyone around him, told him to stop. A car from the Parks and Recreation Department was parked on the line.