Increasingly, autonomous vehicles are entering our daily lives to deliver groceries, provide security, clean roads, and more. Robust operation in everyday environments is a challenge, given changes in weather, unpredictable human behavior, disregard for traffic rules, and so on.
Over the last three years we have designed autonomous vehicles for mail delivery on the UC San Diego campus. The vehicle has to navigate a campus that is constantly under construction, is shared with 70,000+ people, and must operate in the presence of 5,000+ other vehicles. A full-stack system has been designed for autonomous operation, including methods for real-time mapping of the campus and for detecting and tracking people, cars, skateboards, etc. Estimation of the intent of other road users and dynamic mission planning are needed to achieve robust autonomy.
In this presentation we will discuss how a mixture of sensors, from lidar and radar to cameras, GPS, sonar and IMUs, has been evaluated for robust performance. We will present both System 1.0, which was used for a six-month deployment on campus, and the latest System 2.0, which is currently deployed on campus for tasks such as mail delivery. We will discuss strategies for early and late fusion, describe the overall system design, and summarize major lessons learnt.
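The distinction between early and late fusion mentioned above can be sketched with toy numbers. This is a minimal illustration only: the feature values, weights and function names are assumptions for the sketch, not details of the deployed campus system.

```python
import numpy as np

# Hypothetical per-sensor feature vectors for one detection candidate.
lidar_feat = np.array([0.8, 0.1, 0.3])   # e.g., geometric features
camera_feat = np.array([0.6, 0.9])       # e.g., appearance features

def early_fusion(features):
    """Early fusion: concatenate raw features from all sensors,
    then score the joint vector with a single (toy) linear model."""
    x = np.concatenate(features)
    w = np.ones_like(x) / x.size          # placeholder weights
    return float(w @ x)

def late_fusion(scores, weights=None):
    """Late fusion: each sensor produces its own decision score;
    the scores are combined afterwards, here by weighted averaging."""
    scores = np.asarray(scores, dtype=float)
    if weights is None:
        weights = np.ones_like(scores) / scores.size
    return float(np.asarray(weights, dtype=float) @ scores)

joint_score = early_fusion([lidar_feat, camera_feat])
combined_score = late_fusion([0.7, 0.9])  # per-sensor confidences
```

Early fusion lets the model exploit cross-sensor correlations at the cost of a larger joint feature space; late fusion keeps per-sensor pipelines independent, which degrades more gracefully when one sensor fails.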
Jokingly, a human source is sometimes referred to as a “meat sensor”, as opposed to a device made of metal, glass or other materials. In contrast to data collected by devices, intelligence collected from human sources is not just a recording of physical phenomena that can be processed through sophisticated algorithms to attach meaning to those phenomena: HUMINT may also be composed of opinion, perception, speculation and hearsay. Even when a human is reporting on physical phenomena, the act of converting those observations into textual form in some human language requires preprocessing on the part of the reporting source. That preprocessing not only affects the “signal”; it also affects the way in which the information is further processed, whether by algorithm or by other humans. The preprocessing also varies from individual to individual, based upon factors such as language, experience and knowledge. In this talk, we will look at the facets of HUMINT that need to be considered for fusion purposes, and at some possible strategies for dealing with them.
Deceptive, inaccurate, or misleading information can be spread intentionally (as disinformation) or unintentionally (as misinformation). Either way, being ill-informed is problematic for decision making in most spheres of life, be it health, finances or politics. To aid human users in identifying various kinds of problematic “fake” content, several methodologies have been developed in the fields of Natural Language Processing (NLP)/Computational Linguistics, Machine Learning (ML), and Computer Science.
Five large families of such systems include automated deception detectors, clickbait detectors, satirical fake detectors, rumor debunkers, and computational fact-checking tools. While the computational literature documents their advances, these systems are barely known outside of experimental labs and their adoption is slow.
This talk exemplifies representative methodologies, their success rates, and limitations. Given the viral nature and the scale of the problem, adoption of some form of automated detection systems is inevitable. Yet, to avoid being disinformed, the final solution resides in the human mind: which sources to trust, what messages to believe, whose expertise to rely on. Rubin synthesizes evidence-based research from interpersonal, social, and cognitive psychology, computer-mediated communication, and information science to expose inherently human pragmatic challenges.
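At their core, many of the detector families above are supervised text classifiers. A minimal baseline can be sketched with scikit-learn; the headlines, labels and model choice here are illustrative assumptions, not a reproduction of any specific published system.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy, hand-labeled headlines (1 = clickbait-like, 0 = neutral).
# Real systems train on thousands of labeled examples.
headlines = [
    "You won't believe what this doctor discovered",
    "10 shocking tricks banks don't want you to know",
    "City council approves new transit budget",
    "Study reports modest decline in regional unemployment",
]
labels = [1, 1, 0, 0]

# Tf-idf word/bigram features plus a linear classifier: a common baseline.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(headlines, labels)

predictions = model.predict([
    "This one weird trick will shock you",
    "Parliament debates annual fisheries quota",
])
```

Such surface-level classifiers illustrate both the promise and the limits noted above: they scale to viral volumes of content, but they score stylistic cues rather than verifying truth, which is why human judgment about sources and expertise remains essential.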
Rubin’s (2019) Mis- and Disinformation Triangle posits that three interacting causal factors enable the infodemic: digital media users serve as susceptible hosts, prone to being deceived; various types of “fakes” bombard their hosts as virulent pathogens; and digital platforms are financially motivated to remain environments conducive to spreading the infodemic.
Rubin proposes that simultaneous and sustained disruption of the interactions between these factors can dampen the infodemic. Susceptible minds require more purposeful and vigorous training in the practical skills of digital literacy in the educational system. Toxic environments urgently need greater legislative oversight and regulation. Automated solutions can assist human users with detection at scale, yet automation alone is insufficient as deterrence or prevention. Thus, society should invest more in deploying artificial intelligence (AI) in conjunction with human intelligence (HI) to combat the problem of mis- and disinformation.