Current systems for assisted and automated driving operate in an Open-World context: a space of unknown, unconsidered and untested scenarios. To cope with this effectively infinite number of scenarios and cases, software is becoming more and more important. This leads to a need for generalization – the capability of software to handle even those scenarios that were not completely and formally considered during development. We need software that can learn. We need Artificial Intelligence, since classic software always covers only a deterministic, enumerable set of cases.
Although AI has shown great progress across many industry domains in recent years, the problems we face with this innovative technology are novel and not comparable with those of classic software. And more critically – we lack field experience.
Generalization capability is (or shall be) established after a successful training process of an AI module and relates to adequate decision making. At the moment we cannot understand or explain how a certain decision has been made. We are facing a challenge called “Interpretability”.
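To make the contrast concrete, here is a minimal sketch of what interpretability looks like when it *is* available: for a simple linear model, each feature's contribution to the decision can be read off directly. For deep AI modules no such direct reading exists, which is exactly the challenge. The feature names, weights and inputs below are hypothetical illustration, not a real perception stack.

```python
import numpy as np

# Toy linear "model": its decision is fully interpretable, because the
# score decomposes into one additive contribution per input feature.
w = np.array([0.8, -1.5, 0.3])   # hypothetical learned weights
x = np.array([1.0, 0.2, 2.0])    # one hypothetical input sample

contributions = w * x            # per-feature contribution to the score
score = contributions.sum()      # the model's raw decision score
decision = "brake" if score > 0 else "keep speed"

for name, c in zip(["obstacle", "free_space", "closing_speed"], contributions):
    print(f"{name:>13s}: {c:+.2f}")
print("score:", round(score, 2), "->", decision)
```

A deep neural network offers no such per-feature decomposition out of the box; post-hoc explanation methods exist, but they approximate rather than reveal the actual decision process.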
The current state of the art on AI vulnerability shows that even a minor change of the input data processed by an AI module is sufficient to confuse the module entirely and lead it to a faulty decision. We are facing a challenge called “Robustness”.
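The effect can be sketched in a few lines. The toy linear classifier below is a hypothetical stand-in for an AI module; the perturbation follows the sign-based idea behind the Fast Gradient Sign Method (FGSM), and all names and numbers are illustrative assumptions, not a real system.

```python
import numpy as np

# Sketch of the "Robustness" challenge: a small per-component change of
# the input flips the decision of a model.

rng = np.random.default_rng(0)
d = 4096                          # e.g. the pixels of a 64x64 image
w = rng.normal(size=d)            # weights of the toy classifier
x = rng.normal(size=d)            # one input sample

def predict(inputs):
    """Binary decision of the toy module: the sign of the linear score."""
    return 1 if float(inputs @ w) > 0.0 else -1

original = predict(x)

# Sign-based perturbation: push every component slightly against the
# current decision. The per-component budget needed to cross the decision
# boundary shrinks as the input dimension grows, so in high dimensions
# the change per component is tiny relative to the input values.
margin = abs(float(x @ w))
epsilon = 1.01 * margin / np.abs(w).sum()   # just enough budget to flip
x_adv = x - original * epsilon * np.sign(w)

print("per-component change:", round(epsilon, 4))
print("clean decision:", predict(x), " perturbed decision:", predict(x_adv))
```

Real attacks on deep networks work the same way in spirit: the gradient of the loss tells the attacker in which direction to nudge each input component, and the required nudge is often imperceptible to a human observer.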
The Open-World context forces us to deal with scenarios that are far from ideal but that we must consider and react to properly. Camera lenses will often be occluded to some extent; a radar sensor may be limited by heavy snowfall. There is no perfect sensor type for automated driving, and no perfect algorithms either. We are facing a challenge called “Performance Limitations”.
And for all of these new challenges we need a broader consensus – in society, industry and science.