Part I

The first car was electric.

Most people will do a double-take at that sentence.

Typically, the title of ‘first car’ is associated with the gas-powered Ford Model-T. In fact, the Model-T only mainstreamed the automobile because its assembly process made it affordable after its 1908 debut. Seventy years earlier, a few electric cars were already being developed, and by the latter half of the 19th century those vehicles were practical enough to carry people and their belongings over sizable distances. Surprisingly, urban areas like New York City were dotted with EV chargers even back in the early 1900s. Electric cars eventually lost out because of limited range and speed; battery technology simply wasn’t ready. They faded from memory for a time, and gas-powered cars like Mustangs, Beetles, and Jeeps became the most iconic vehicles.

But for the Model-T to catch on and spawn a lineage that turned into a nearly $2 trillion industry, it had one major hurdle to overcome: when it came out, nobody knew how to drive.

Ford addressed this by having salespeople teach customers how to drive as part of the sales pitch. Weak reliability and poor safety plagued the early automobiles, but gradually, infrastructure moved away from the horse and buggy. Driveways, traffic lights, and America’s interstate system emerged to support the first automobile revolution.

The intuitive nature of a traditional car’s user interface also supported the automobile boom. Drivers understood that steering-wheel input moved the front wheels, pointing the car in the corresponding direction, and control surfaces like mirrors and indicators were placed logically. The interface was taught in driver’s education classes and people understood it, so we trusted our cars. That trust made vehicles catch on to the point where there is now one for every seven people on Earth.

With another automotive revolution on its way, everything we know about cars will change again. For autonomous vehicles (AVs) to catch on, they must have a dialogue with the user just like the conventional automobile does. Effective communication like Ford’s can help new technology gain traction, and in the new auto industry, dealers have the opportunity to educate. But some of that efficacy between machine and user falls upon the product itself. By nature, AVs must show their users that they are working; they will be most useful for those farthest from understanding them, like children and the elderly. The user need not understand that LiDAR and a suite of cameras scan the vehicle’s surroundings to create a point cloud, which an algorithm interprets to make a decision about the car’s movement.
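For readers who do want to peek under the hood, the sketch below shows that pipeline in miniature: points from the sensors go in, a driving decision comes out. It is a toy illustration in Python, not HARMAN’s or any automaker’s actual stack; the function names, lane width, and two-second rule are assumptions made for the example.

```python
# Illustrative sketch only: a toy perception-to-decision loop.
# Names and thresholds are hypothetical, not a production AV system.
import numpy as np

def closest_obstacle_ahead(point_cloud: np.ndarray, lane_half_width: float = 1.5) -> float:
    """Distance (m) to the nearest point in the vehicle's own lane.

    point_cloud: N x 3 array of (x, y, z) points in the car's frame,
    where x is forward and y is left/right.
    """
    ahead = point_cloud[(point_cloud[:, 0] > 0) & (np.abs(point_cloud[:, 1]) < lane_half_width)]
    return float(ahead[:, 0].min()) if len(ahead) else float("inf")

def decide(point_cloud: np.ndarray, speed_mps: float) -> str:
    """Toy planner: brake if the gap ahead is smaller than a two-second buffer."""
    gap = closest_obstacle_ahead(point_cloud)
    safe_gap = 2.0 * speed_mps          # crude two-second following rule
    return "brake" if gap < safe_gap else "maintain speed"

cloud = np.array([
    [18.0, 0.2, 1.0],   # points from an object 18 m ahead, near lane centre
    [18.1, 0.3, 1.2],
    [45.0, 5.0, 0.5],   # clutter well off to the side
])
print(decide(cloud, speed_mps=13.0))    # -> "brake" (18 m gap < 26 m safe gap)
```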

However, vehicles can still build trust and efficacy by empowering the user with agency throughout the drive, which is especially important in partially automated vehicles, where man and machine walk a tightrope over who is in control.

Partial automation has inherent challenges in human-machine integration because self-driving systems rely on deep learning, meaning their guidance algorithms learn how to behave from massive amounts of training data. But robots have trouble getting creative: specifically, they struggle to make decisions in unfamiliar edge cases that humans could understand. Success means making the car’s AI and the driver’s mind work in concert, striking a Goldilocks balance of attention and relaxation, so users can enjoy their time in the car safely.

Given the challenges with autonomy, HARMAN’s systems avoid tackling automated driving and instead serve to assist and empower the driver. The distinction is key because individual features can be passive (just a warning) or active (actually braking or steering). Passive features assist the driver, while active features mean the driver relinquishes control. In cars on the market today, HARMAN offers its ADAS (advanced driver-assistance systems) suite, which assists the human driver by preventing actions like changing into an occupied lane or following another car too closely. The suite includes Surround View to monitor blind spots and Forward-Facing Cameras to detect possible collisions and monitor the car’s position in its lane. Information from the sensors is relayed to the driver, usually through the infotainment system or warning lights on the mirrors or HUD.
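As a concrete illustration of the passive end of that spectrum, the sketch below shows how a forward-collision warning could turn a sensed gap and closing speed into a time-to-collision alert. It is a hypothetical Python example, not HARMAN’s ADAS implementation; the 2.5-second threshold and the function names are assumptions.

```python
# Illustrative sketch only: a passive forward-collision warning.
# Threshold and names are assumptions, not a HARMAN specification.
from typing import Optional

def time_to_collision(gap_m: float, closing_speed_mps: float) -> float:
    """Seconds until contact if neither vehicle changes speed."""
    if closing_speed_mps <= 0:       # lead car is pulling away or holding distance
        return float("inf")
    return gap_m / closing_speed_mps

def forward_warning(gap_m: float, own_speed_mps: float, lead_speed_mps: float,
                    warn_below_s: float = 2.5) -> Optional[str]:
    """Passive feature: alert the driver, never brake for them."""
    ttc = time_to_collision(gap_m, own_speed_mps - lead_speed_mps)
    return "FORWARD COLLISION WARNING" if ttc < warn_below_s else None

# Closing at 8 m/s on a car 15 m ahead leaves under two seconds, so the alert fires.
print(forward_warning(gap_m=15.0, own_speed_mps=30.0, lead_speed_mps=22.0))
```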

In the future, as more systems covering more functions advance, people can expect to still drive their cars, but get buzzed at if they glance at their phones or drift out of their lanes. Assistance systems dependent on computer vision will become more responsive and accurate once they have better analytics from a larger data pool. New features include Percentage of Eyelid Closure (PERCLOS), a technique cars use to analyze eye states and measure the driver’s drowsiness level. Its application: future cars can make sure the driver is not texting or falling asleep; think of it as a blind-spot monitor for your attention. Drivers may also expect a wider variety of sensors that can detect wildlife, pedestrians, or emergency vehicles. Assistance means drivers stay in control, empowered with agency, and aids will help prevent disaster until fully effective autonomy arrives.
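To make the PERCLOS idea concrete, the sketch below computes a rolling eyelid-closure ratio from per-frame eye-openness estimates, such as a driver-facing camera might produce. The 80%-closed criterion and one-minute window follow the common definition of PERCLOS; the 0.15 alert cutoff and the class name are hypothetical values chosen for the example, not any vendor’s specification.

```python
# Illustrative sketch only: rolling PERCLOS from per-frame eye-openness values.
from collections import deque

class PerclosMonitor:
    """Tracks the fraction of recent frames in which the eyes were mostly closed."""

    def __init__(self, fps: int = 30, window_s: int = 60,
                 closed_threshold: float = 0.2, drowsy_above: float = 0.15):
        # A frame counts as "closed" when eye openness is below 20%,
        # i.e. the eyelid is at least 80% closed (the usual PERCLOS criterion).
        self.closed_threshold = closed_threshold
        self.drowsy_above = drowsy_above            # hypothetical alert cutoff
        self.frames = deque(maxlen=fps * window_s)  # rolling one-minute window

    def update(self, eye_openness: float) -> float:
        """Add one frame's eye openness (0 = shut, 1 = wide open); return PERCLOS."""
        self.frames.append(eye_openness < self.closed_threshold)
        return sum(self.frames) / len(self.frames)

    def is_drowsy(self, eye_openness: float) -> bool:
        return self.update(eye_openness) > self.drowsy_above

# An alert driver who only blinks keeps PERCLOS far below the cutoff.
monitor = PerclosMonitor()
samples = [0.9] * 100 + [0.05] * 5 + [0.9] * 100    # one short blink mid-stream
print(any(monitor.is_drowsy(s) for s in samples))   # -> False
```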

In the past five years, the industry has returned to electric propulsion. Consumers have been hesitant to adopt new powertrains because they expect to refuel, or rather recharge, only about every 300 miles, just as in their familiar gas-powered cars. The expertise HARMAN gains from assistance systems in vehicles today will help automakers build AVs that converse with people the way the Model-T did, unlocking autonomy’s power to make efficiency leaps greater than the gas-to-electric transition, safety breakthroughs greater than the advent of airbags and seatbelts, and mobility advancements on par with the advent of the motor vehicle itself.