Augmented and virtual reality will involve human senses in verifying the operations of information systems

In the future, machines and AI systems will have a deeper understanding of the actions of their human users.

Even now, AI can generate an image of what a person is watching on a screen just by recording their brain activity, or deduce people's emotions from the microexpressions on their faces.

In the Human Verifiable Computing project, VTT used augmented and virtual reality to develop solutions for building trust between people and systems and facilitating the verification of information security.

Augmented reality was also used to give multisensory feedback: a maintenance worker turning a valve is shown visual instructions and receives an error message if the valve is operated incorrectly.

Augmented reality

Augmented Reality (AR) is an interactive experience of a real-world environment whose elements are 'augmented' by computer-generated perceptual information, sometimes across multiple sensory modalities, including visual, auditory, haptic, somatosensory, and olfactory.[1] The overlaid sensory information can be constructive (i.e. additive to the natural environment) or destructive (i.e. masking of the natural environment) and is seamlessly interwoven with the physical world such that it is perceived as an immersive aspect of the real environment.[2] In this way, augmented reality alters one's ongoing perception of a real-world environment, whereas virtual reality completely replaces the user's real-world environment with a simulated one.[3][4] Augmented reality is related to two largely synonymous terms: mixed reality and computer-mediated reality.

The first functional AR systems that provided immersive mixed reality experiences for users were invented in the early 1990s, starting with the Virtual Fixtures system developed at the U.S. Air Force's Armstrong Laboratory in 1992.[2][5][6][7] The first commercial augmented reality experiences were used largely in the entertainment and gaming businesses, but other industries are now also becoming interested in AR's possibilities, for example in knowledge sharing, education, managing information overload, and organizing remote meetings.

Modern mobile computing devices like smartphones and tablet computers contain the elements needed for AR, often including a camera and MEMS sensors such as an accelerometer, GPS, and a solid-state compass, making them suitable AR platforms.[18] Various technologies are used in augmented reality rendering, including optical projection systems, monitors, handheld devices, and display systems worn on the human body.

Modern HMDs often employ sensors for six-degrees-of-freedom monitoring that allow the system to align virtual information with the physical world and adjust it in step with the user's head movements.[19][20][21] HMDs can provide VR users with mobile and collaborative experiences.[22] Specific providers, such as uSens and Gestigon, include gesture controls for full virtual immersion.[23][24] In January 2015, Meta launched a funding round led by Horizons Ventures, Tim Draper, Alexis Ohanian, BOE Optoelectronics and Garry Tan.[25][26][27] On February 17, 2016, Meta announced their second-generation product, the Meta 2, at TED.

The Meta 2 head-mounted display headset uses a sensory array for hand interactions and positional tracking, offers a 90-degree (diagonal) field of view, and has a display resolution of 2560 x 1440 (20 pixels per degree), which is considered the largest field of view (FOV) currently available.[28][29][30][31] AR displays can also be rendered on devices resembling eyeglasses.
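
To make the six-degrees-of-freedom alignment concrete: each frame, the renderer must re-express every world-anchored virtual object in head coordinates, using the pose reported by the sensors. A minimal numpy sketch of that transform (the function name and the simple pose model are illustrative assumptions, not any vendor's API):

```python
import numpy as np

def world_to_head(point_world, head_rotation, head_position):
    """Re-express a world-anchored point in head (display) coordinates.

    head_rotation: 3x3 rotation matrix of the head in world coordinates.
    head_position: 3-vector position of the head in world coordinates.
    Applying the inverse pose maps world coordinates into the head frame.
    """
    return head_rotation.T @ (np.asarray(point_world) - np.asarray(head_position))

# A virtual label anchored one metre in front of the world origin.
anchor = np.array([0.0, 0.0, 1.0])

# Head turned 90 degrees left (about the vertical y-axis), standing at the origin.
theta = np.pi / 2
R = np.array([[np.cos(theta), 0, np.sin(theta)],
              [0, 1, 0],
              [-np.sin(theta), 0, np.cos(theta)]])
t = np.zeros(3)

# As the head rotates, the anchor moves in head coordinates, so the renderer
# can keep it glued to the same physical spot.
print(world_to_head(anchor, R, t))
```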

However, practically speaking, augmented reality is expected to include registration and tracking between the superimposed perceptions, sensations, information, data, and images and some portion of the real world.[38] CrowdOptic, an existing app for smartphones, applies algorithms and triangulation techniques to photo metadata including GPS position, compass heading, and a time stamp to arrive at a relative significance value for photo objects.[39] CrowdOptic technology can be used by Google Glass users to learn where to look at a given point in time.[40]
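
The kind of triangulation described can be illustrated with plain geometry: two photos taken from known GPS positions with known compass headings define two bearing rays, and their intersection estimates the object both cameras are pointed at. A toy sketch in local planar coordinates (an illustration of the principle, not CrowdOptic's actual algorithm):

```python
import math

def intersect_bearings(p1, heading1, p2, heading2):
    """Intersect two bearing rays to estimate a common focal point.

    p1, p2: (east, north) observer positions in a local planar frame.
    heading1, heading2: compass headings in degrees (0 = north, clockwise).
    Returns the intersection point, or None if the bearings are parallel.
    """
    d1 = (math.sin(math.radians(heading1)), math.cos(math.radians(heading1)))
    d2 = (math.sin(math.radians(heading2)), math.cos(math.radians(heading2)))
    # Solve p1 + s*d1 = p2 + t*d2 for s using Cramer's rule on a 2x2 system.
    det = d1[0] * (-d2[1]) - (-d2[0]) * d1[1]
    if abs(det) < 1e-9:
        return None  # parallel lines of sight, no unique intersection
    rx, ry = p2[0] - p1[0], p2[1] - p1[1]
    s = (rx * (-d2[1]) - (-d2[0]) * ry) / det
    return (p1[0] + s * d1[0], p1[1] + s * d1[1])

# Two observers 100 m apart, both photographing the same landmark.
print(intersect_bearings((0, 0), 45, (100, 0), 315))  # -> (50.0, 50.0)
```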

The first contact lens display was reported in 1999,[44] with further developments reported in 2010-2011.[45][46][47][48] Another version of contact lenses, in development for the U.S. military, is designed to function with AR spectacles, allowing soldiers to focus on close-to-the-eye AR images on the spectacles and distant real-world objects at the same time.[49][50] The futuristic short film Sight[51] features contact lens-like augmented reality devices.[52][53] Many scientists have been working on contact lenses capable of many different technological feats.

The disadvantages are the physical constraints of the user having to hold the handheld device out in front of them at all times, as well as the distorting effect of classically wide-angled mobile phone cameras when compared to the real world as viewed through the eye.[62] The issues arising from the user having to hold the handheld device (manipulability) and perceiving the visualisation correctly (comprehensibility) have been summarised into the HARUS usability questionnaire.[63] Games such as Pokémon Go and Ingress utilize an Image Linked Map (ILM) interface, where approved geotagged locations appear on a stylized map for the user to interact with.[64] Spatial augmented reality (SAR) augments real-world objects and scenes without the use of special displays such as monitors, head-mounted displays or hand-held devices.

Although there is a plethora of real-time multimedia transport protocols, support from the network infrastructure is needed as well.[71] Input techniques include speech recognition systems that translate a user's spoken words into computer instructions, and gesture recognition systems that interpret a user's body movements by visual detection or with sensors embedded in a peripheral device such as a wand, stylus, pointer, glove or other body wear.[72][73][74][75] Products that aim to serve as controllers for AR headsets include Wave by Seebright Inc.

With the improvement of technology and computers, augmented reality is going to drastically change our perspective of the real world.[76] According to Time Magazine, augmented reality and virtual reality are predicted to become the primary mode of computer interaction in about 15-20 years.[77] Computers are improving at a very fast rate, which opens up new ways to improve other technology.

Mathematical methods used in the second stage include projective (epipolar) geometry, geometric algebra, rotation representation with the exponential map, Kalman and particle filters, nonlinear optimization, and robust statistics.[citation needed] Augmented Reality Markup Language (ARML) is a data standard developed within the Open Geospatial Consortium (OGC),[85] which consists of an XML grammar to describe the location and appearance of virtual objects in the scene, as well as ECMAScript bindings to allow dynamic access to properties of virtual objects.
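
Of the methods listed, the exponential-map rotation representation is compact enough to show directly: an axis-angle vector is turned into a rotation matrix with Rodrigues' formula. A minimal numpy sketch:

```python
import numpy as np

def exp_map(omega):
    """Rodrigues' formula: axis-angle vector -> 3x3 rotation matrix.

    omega: 3-vector whose direction is the rotation axis and whose
    norm is the rotation angle in radians.
    """
    theta = np.linalg.norm(omega)
    if theta < 1e-12:
        return np.eye(3)  # negligible rotation
    k = omega / theta
    K = np.array([[0, -k[2], k[1]],
                  [k[2], 0, -k[0]],
                  [-k[1], k[0], 0]])  # skew-symmetric cross-product matrix
    return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)

# A quarter turn about the z-axis maps the x-axis onto the y-axis.
R = exp_map(np.array([0.0, 0.0, np.pi / 2]))
print(np.round(R @ np.array([1.0, 0.0, 0.0]), 6))  # -> [0. 1. 0.]
```

This three-parameter representation is popular in tracking pipelines because it has no redundant parameters, which suits the Kalman filtering and nonlinear optimization steps mentioned above.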

Environmental elements such as lighting and sound can prevent the sensors of AR devices from detecting necessary data and ruin the immersion of the end user.[99] Another aspect of context design involves the design of the system's functionality and its ability to accommodate user preferences.[100][101] While accessibility tools are common in basic application design, some consideration should be given to designing time-limited prompts (to prevent unintentional operations), audio cues, and overall engagement time.

A common technique for improving the usability of augmented reality applications is to identify the frequently accessed areas of the device's touch display and design the application to match those areas of control.[102] It is also important to structure the user journey maps and the flow of information presented, which reduces the system's overall cognitive load and greatly improves the learning curve of the application.[103] In interaction design, it is important for developers to utilize augmented reality technology that complements the system's function or purpose.[104] For instance, the exciting AR filters and the design of the unique sharing platform in Snapchat enable users to enhance their social interactions.

Visual content for AR also includes animated media imagery such as images and videos, mostly traditional 2D media rendered in a new context for augmented reality.[98] When virtual objects are projected onto a real environment, it is challenging for augmented reality application designers to ensure a perfectly seamless integration relative to the real-world environment, especially with 2D objects.

By augmenting archaeological features onto the modern landscape, AR allows archaeologists to formulate possible site configurations from extant structures.[110] Computer-generated models of ruins, buildings, landscapes or even ancient people have been recycled into early archaeological AR applications.[111][112][113] For example, implementing a system like VITA (Visual Interaction Tool for Archaeology) allows users to imagine and investigate excavation results instantly without leaving their homes.

Architecture sight-seeing can be enhanced with AR applications, allowing users viewing a building's exterior to virtually see through its walls, viewing its interior objects and layout.[115][116][117] With the continual improvements to GPS accuracy, businesses are able to use augmented reality to visualize georeferenced models of construction sites, underground structures, cables and pipes using mobile devices.[118] Augmented reality is applied to present new projects, to solve on-site construction challenges, and to enhance promotional materials.[119] Examples include the Daqri Smart Helmet, an Android-powered hard hat used to create augmented reality for the industrial worker, including visual instructions, real-time alerts, and 3D mapping.

Computer-generated simulations of historical events allow students to explore and learn the details of each significant area of the event site.[142] In higher education, Construct3D, a Studierstube system, allows students to learn mechanical engineering concepts, math or geometry.[143] Chemistry AR apps allow students to visualize and interact with the spatial structure of a molecule using a marker object held in the hand.[144] Anatomy students can visualize different systems of the human body in three dimensions.[145] Augmented reality technology enhances remote collaboration, allowing students and instructors in different locales to interact by sharing a common virtual learning environment populated by virtual objects and learning materials.[146] Primary school children learn easily from interactive experiences.

Apps that leverage augmented reality to aid learning include SkyView for studying astronomy,[147] AR Circuits for building simple electric circuits,[148] and SketchAr for drawing.[149] AR can also help parents and teachers achieve the goals of modern education, which might include more individualized and flexible learning, closer connections between what is taught at school and the real world, and greater student engagement in their own learning.

Companies and platforms like Niantic and Proxy42 emerged as major augmented reality gaming creators.[158][159] Niantic is notable for releasing the record-breaking game Pokémon Go.[160] Disney has partnered with Lenovo to create the augmented reality game Star Wars: Jedi Challenges that works with a Lenovo Mirage AR headset, a tracking sensor and a Lightsaber controller, scheduled to launch in December 2017.[161] AR allows industrial designers to experience a product's design and operation before completion.

It has also been used to compare digital mock-ups with physical mock-ups to find discrepancies between them.[163][164] Since 2005, a device called a near-infrared vein finder, which films subcutaneous veins and then processes and projects the image of the veins onto the skin, has been used to locate veins.[165][166] AR provides surgeons with patient monitoring data in the style of a fighter pilot's heads-up display, and allows patient imaging records, including functional videos, to be accessed and overlaid.

Examples include a virtual X-ray view based on prior tomography or on real-time images from ultrasound and confocal microscopy probes,[167] visualizing the position of a tumor in the video of an endoscope,[168] or radiation exposure risks from X-ray imaging devices.[169][170] AR can enhance viewing a fetus inside a mother's womb.[171] Siemens, Karl Storz and IRCAD have developed a system for laparoscopic liver surgery that uses AR to view sub-surface tumors and vessels.[172] AR has been used for cockroach phobia treatment.[173] Patients wearing augmented reality glasses can be reminded to take medications.[174] Virtual reality has been seen as promising in the medical field since the 1990s.[175] Augmented reality can be very helpful in the medical field.

An adaptive augmented schedule in which students were shown the augmentation only when they departed from the flight path proved to be a more effective training intervention than a constant schedule.[181][182] Flight students taught to land in the simulator with the adaptive augmentation learned to land a light aircraft more quickly than students with the same amount of landing training in the simulator but with constant augmentation or without any augmentation.[181] An interesting early application of AR occurred when Rockwell International created video map overlays of satellite and orbital debris tracks to aid in space observations at Air Force Maui Optical System.

Virtual maps and 360° view camera imaging can also be rendered to aid a soldier's navigation and battlefield perspective, and this can be transmitted to military leaders at a remote command center.[185] The NASA X-38 was flown using a Hybrid Synthetic Vision system that overlaid map data on video to provide enhanced navigation for the spacecraft during flight tests from 1998 to 2002.

Information can be displayed on an automobile's windshield, indicating destination directions and meter readings, weather, terrain, road conditions, and traffic information, as well as alerts to potential hazards in the vehicle's path.[188][189][190] Aboard maritime vessels, AR can allow bridge watch-standers to continuously monitor important information such as a ship's heading and speed while moving throughout the bridge or performing other tasks.[191] Augmented reality may also benefit work collaboration, as people may be inclined to interact more actively with their learning environment.

A tool for 3D music creation in clubs that, in addition to regular sound mixing features, allows the DJ to play dozens of sound samples placed anywhere in 3D space has been conceptualized.[220] Leeds College of Music teams have developed an AR app that can be used with Audient desks and allows students to use their smartphone or tablet to put layers of information or interactivity on top of an Audient mixing desk.[221] ARmony is a software package that uses augmented reality to help people learn an instrument.[222] In a proof-of-concept project, Ian Sterling, an interaction design student at California College of the Arts, and software engineer Swaroop Pal demonstrated a HoloLens app whose primary purpose is to provide a 3D spatial UI for cross-platform devices (the Android Music Player app and an Arduino-controlled fan and light) and also allow interaction using gaze and gesture control.[223][224][225][226] AR Mixer is an app that allows one to select and mix between songs by manipulating objects, such as changing the orientation of a bottle or can.[227] In a video, Uriel Yehezkel demonstrates using the Leap Motion controller and GECO MIDI to control Ableton Live with hand gestures, and states that by this method he was able to control more than 10 parameters simultaneously with both hands, taking full control over the construction of the song and its emotion and energy.[228][229][better source needed]

A system using explicit gestures and implicit dance moves to control the visual augmentations of a live music performance, enabling more dynamic and spontaneous performances and, in combination with indirect augmented reality, leading to a more intense interaction between artist and audience, has been suggested.[231] Research by members of the CRIStAL laboratory at the University of Lille makes use of augmented reality to enrich musical performance.

The ControllAR project allows musicians to augment their MIDI control surfaces with the remixed graphical user interfaces of music software.[232] The Rouages project proposes to augment digital musical instruments in order to reveal their mechanisms to the audience and thus improve the perceived liveness.[233] Reflets is a novel augmented reality display dedicated to musical performances where the audience acts as a 3D display by revealing virtual content on stage, which can also be used for 3D musical interaction and collaboration.[234] Augmented reality is becoming more frequently used for online advertising.

Human–computer interaction

The term was popularized by Stuart K. Card, Allen Newell, and Thomas P. Moran in their seminal 1983 book, The Psychology of Human–Computer Interaction, although the authors first used the term in 1980[1] and the first known use was in 1975.[2] The term connotes that, unlike other tools with only limited uses (such as a hammer, useful for driving nails but not much else), a computer has many uses, and this takes place as an open-ended dialog between the user and the computer.

Desktop applications, internet browsers, handheld computers, and computer kiosks make use of the prevalent graphical user interfaces (GUI) of today.[5] Voice user interfaces (VUI) are used for speech recognition and synthesising systems, and emerging multi-modal and graphical user interfaces allow humans to engage with embodied character agents in a way that cannot be achieved with other interface paradigms.

Instead of designing regular interfaces, the different research branches have focused on multimodality rather than unimodality, intelligent adaptive interfaces rather than command/action-based ones, and active rather than passive interfaces.[citation needed] For instance, sensors like video cameras and eye trackers can now be used to feed human physiological information back to computer systems.[6][7] Such information can be used by computers to dynamically adapt the content of interfaces.

The Association for Computing Machinery (ACM) defines human–computer interaction as 'a discipline concerned with the design, evaluation and implementation of interactive computing systems for human use and with the study of major phenomena surrounding them'.[5] An important facet of HCI is the securing of user satisfaction (or simply End User Computing Satisfaction).

On the human side, communication theory, graphic and industrial design disciplines, linguistics, social sciences, cognitive psychology, social psychology, and human factors such as computer user satisfaction are relevant.

A classic example of this is the Three Mile Island accident, a nuclear meltdown in which investigations concluded that the design of the human-machine interface was at least partly responsible for the disaster.[8][9][10] Similarly, accidents in aviation have resulted from manufacturers' decisions to use non-standard flight instrument or throttle quadrant layouts: even though the new designs were proposed to be superior in basic human-machine interaction, the 'standard' layouts were already ingrained in pilots' habits, so the conceptually good idea actually had undesirable results.

In doing so, much of the research in the field seeks to improve human–computer interaction by improving the usability of computer interfaces.[11] How usability is to be precisely understood, how it relates to other social and cultural values, and when it is, and when it may not be, a desirable property of computer interfaces is increasingly debated.[12][13] Visions of what researchers in the field seek to achieve vary.

Researchers in HCI are interested in developing new design methodologies, experimenting with new devices, prototyping new software and hardware systems, exploring new interaction paradigms, and developing models and theories of interaction.

In the study of personal information management (PIM), human interactions with the computer are placed in a larger informational context – people may work with many forms of information, some computer-based, many not (e.g., whiteboards, notebooks, sticky notes, refrigerator magnets) in order to understand and effect desired changes in their world.

When evaluating a current user interface, or designing a new user interface, it is important to keep in mind experimental design principles, and to repeat the iterative design process until a sensible, user-friendly interface is created.[16]

Early methodologies, for example, treated users' cognitive processes as predictable and quantifiable and encouraged design practitioners to look to cognitive science results in areas such as memory and attention when designing user interfaces.

Modern models tend to focus on a constant feedback and conversation between users, designers, and engineers and push for technical systems to be wrapped around the types of experiences users want to have, rather than wrapping user experience around a completed system.

For example, one study found that people expected a computer with a man's name to cost more than a machine with a woman's name.[22] Other research finds that individuals perceive their interactions with computers more positively than those with humans, despite behaving the same way towards these machines.[23] In human–computer interactions, there usually exists a semantic gap between the human's and the computer's understanding of each other's behavior.

An ontology (in the information science sense), as a formal representation of domain-specific knowledge, can be used to address this problem by resolving the semantic ambiguities between the two parties.[24] Traditionally, as explained in a journal article discussing user modeling and user-adapted interaction, computer use was modeled as a human–computer dyad in which the two were connected by a narrow explicit communication channel, such as text-based terminals.

The future of HCI, based on current promising research, is expected to bring major changes.[26] One of the main conferences for new research in human–computer interaction is the annually held Association for Computing Machinery (ACM) Conference on Human Factors in Computing Systems, usually referred to by its short name, CHI (pronounced kai, or khai).

How will people interact with augmented reality?

Advancements in gesture tracking, motion tracking, eye tracking, and other technologies are laying the groundwork for natural interaction methods that will be essential for the success of augmented reality.

In a typical AR environment—working in an industrial scenario or interacting with a video game, for example—it simply isn’t possible to type, tap, click, and swipe.

The advent of computers in the 1940s and 1950s, with their punch cards and manual switches, eventually led to the first alphanumeric computer keyboards in the 1960s.

Interaction methods based on speech, gestures, motion, and eye movement are more natural for humans, as these are the methods they routinely use to interact with the physical world and each other.

At the same time, interaction methods move away from being deterministic to probabilistic, in that the intent of the user is interpreted from the action in a probabilistic manner.
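
That shift can be made concrete with a Bayes update: rather than mapping an input event to exactly one command, the system keeps a probability distribution over candidate intents and acts on the most likely one. A toy sketch, with invented intents and made-up likelihood numbers purely for illustration:

```python
# Toy Bayesian intent interpretation: the gesture recogniser reports a noisy
# observation, and each candidate intent explains it more or less well.
priors = {"select": 0.5, "dismiss": 0.3, "scroll": 0.2}

# P(observed swipe-like motion | intent): illustrative values, not real data.
likelihood = {"select": 0.1, "dismiss": 0.7, "scroll": 0.4}

# Bayes' rule: posterior is proportional to prior times likelihood.
posterior = {intent: priors[intent] * likelihood[intent] for intent in priors}
total = sum(posterior.values())
posterior = {intent: p / total for intent, p in posterior.items()}

best = max(posterior, key=posterior.get)
print(posterior)                       # dismiss is now the most probable intent
print("interpreted intent:", best)
```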

Workplaces can be dirty, noisy, bright, or dark and can present other challenges not usually encountered in a workspace where a desktop or laptop computer typically has been used.

For example, if a user is wearing a head-up display and is disassembling a piece of equipment or driving a car, conventional typing is almost always impossible.

These technologies will be the basis for intuitive interfaces that, like the mouse, keyboard, and touch screen today, enable the user to select the best interface for the task.

Smartphones and tablets have long incorporated the fundamental building blocks of motion tracking—the gyroscope, compass, and GPS—to determine which way the device is oriented.

Simple AR apps are widely available for these devices, letting users overlay restaurant locations, real estate prices, or the location of their parked car atop a live image of the real world.
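
Under the hood, such overlays reduce to combining a GPS fix with the compass heading: compute the great-circle bearing from the user to the point of interest, subtract the device heading, and map the remaining angle onto the camera's horizontal field of view. A simplified sketch (pitch, roll, and distance are ignored, and the parameter values are illustrative):

```python
import math

def bearing_to(lat1, lon1, lat2, lon2):
    """Initial great-circle bearing from point 1 to point 2, in degrees."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    x = math.sin(dlon) * math.cos(phi2)
    y = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlon)
    return math.degrees(math.atan2(x, y)) % 360

def screen_x(user_lat, user_lon, poi_lat, poi_lon, heading, fov=60, width=1080):
    """Horizontal pixel position of a POI, or None if outside the camera view."""
    # Angle of the POI relative to where the camera points, in (-180, 180].
    rel = (bearing_to(user_lat, user_lon, poi_lat, poi_lon) - heading + 180) % 360 - 180
    if abs(rel) > fov / 2:
        return None  # POI is off-screen
    return (rel / fov + 0.5) * width

# A restaurant due east of the user, device pointing east-north-east:
print(screen_x(60.17, 24.94, 60.17, 24.96, heading=75.0))  # ~810 px, right of centre
```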

Leap Motion CEO Michael Buckwald recalls, “David realized quickly that creating something really simple like a coffee cup took longer on a computer than it would take a five-year-old to create the same thing out of clay.” The goal was to make computer interactions more natural by using gestures that people already intuitively use and understand.

While human hands and fingers comprise a relatively small number of moving parts, tracking them all is a surprisingly difficult task, as it entails accurately tracking 10 fingers through a camera.

Another technology that offers great promise as an AR input system is eye tracking, which monitors the motion of the eye to determine where the user is looking.

That capability is particularly useful in situations where a voice command isn’t realistic or a gesture can’t be made (such as when the user is carrying a load with both hands).

“If someone is reading an e-book, for instance, the book will know when they want to turn the page” by following their eye movements, sentence by sentence.

The Eye Tribe’s technology uses infrared cameras to track eyes, which Johansen says is a far more accurate way to track the movements of the pupil than through cameras that capture visible light.

Visible light cameras, Johansen says, can capture crude eye movements—whether a person is looking up, down, left, or right, for example—but those cameras don’t have the accuracy required to capture the delicate pupil movements involved in a high-grade interface.
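
A crude version of the dark-pupil technique such IR trackers build on can be sketched in a few lines: in an infrared frame the pupil appears as the darkest region, so thresholding the image and taking the centroid of the dark pixels yields a rough pupil position. Real trackers add corneal-glint detection, ellipse fitting, and per-user calibration; this shows only the core idea:

```python
import numpy as np

def pupil_center(ir_frame, percentile=1.0):
    """Rough pupil position: centroid of the darkest pixels in an IR frame.

    ir_frame: 2D numpy array of infrared intensities (dark-pupil assumption).
    Returns (x, y) in pixel coordinates, or None if no dark pixels are found.
    """
    threshold = np.percentile(ir_frame, percentile)
    ys, xs = np.nonzero(ir_frame <= threshold)
    if xs.size == 0:
        return None
    return float(xs.mean()), float(ys.mean())

# Synthetic frame: bright background with a dark disc standing in for the pupil.
frame = np.full((120, 160), 200.0)
yy, xx = np.mgrid[:120, :160]
frame[(xx - 100) ** 2 + (yy - 40) ** 2 < 15 ** 2] = 10.0
print(pupil_center(frame))  # approximately (100.0, 40.0)
```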

A few additional technologies present interesting alternative interface methodologies. More broadly, augmented reality presents myriad challenges for how humans interact with it, requiring technology to stitch together various realities into a cohesive and seamless experience.

For example, speech won’t work well in a loud factory, and many tracking systems—gesture or eye—currently have problems working in bright environments, such as outside on a sunny day.

In the future, those components will likely come together under a common interaction fabric that developers can build on and that users can rely on to interact intuitively with both physical and digital items in the physical spaces where they work.

The thrilling potential of SixthSense technology | Pranav Mistry

At TEDIndia, Pranav Mistry demos several tools that help the physical world interact with the world of data -- including a deep look at his ..

The incredible inventions of intuitive AI | Maurice Conti

What do you get when you give a design tool a digital nervous system? Computers that improve our ability to think and imagine, and robotic systems that come ...

Providing a Sense of Touch through a Brain-Machine Interface

A DARPA-funded research team has demonstrated for the first time in a human a technology that allows an individual to experience the sensation of touch ...

Brain-Computer Interface Demonstration

This video is a demonstration of a BCI developed by the Neuroimaging Methods Group of the Centre for Cognition and Decision Making at the Higher School of Economics, ...

Democratic sensors, The Indian way | Anirudh Sharma | TEDxGateway

This interactive haptic footwear, started by Anirudh Sharma, can aid the visually impaired using GPS navigation. The shoe not only tells the user which way to go ...

Sensory Augmentation Devices

For more like this subscribe to the Open University channel Free learning from The Open ..

AlterEgo: Interfacing with devices through silent speech

AlterEgo is a wearable system that allows a user to silently converse with a computing device without any voice or discernible movements — thereby enabling ...

F8 - Building 8 (Mind Reading Technology) - Regina Dugan

Facebook F8 - Developer Conference At F8 2017, Facebook revealed it has a team of 60 engineers working on building a brain-computer interface that will let ...

6th sense tech Demonstration

A demonstration of Sixth Sense Technology at TED by Pranav Mistry, revealing the tech now used in computers...

A Sci-Fi Short Film : "Sight" - by Sight Systems

Sight, a brilliant and disturbing short sci-fi film by Eran May-raz and Daniel Lazo, imagines a world in which Google Glass-inspired apps are everywhere. This is ...