Perspective Article - (2017) Volume 8, Issue S4

Us and Them: Robotics and Cognitive Neuroscience

Saima Khursheed Beigh* and Humera Shafi

Department of Psychology, University of Kashmir, Srinagar, Jammu & Kashmir, India

Corresponding Author:

Saima Khursheed Beigh
Department of Psychology, University of Kashmir, Srinagar, Jammu & Kashmir, India.
Tel: +919797700530
E-mail: saima.beigh11@gmail.com

Received: March 06, 2017; Accepted: October 11, 2017; Published: October 18, 2017

Citation: Beigh SK, Shafi H (2017) Us and Them: Robotics and Cognitive Neuroscience. J Neurol Neurosci. Vol.8 No.S4:230 doi:10.21767/2171-6625.1000230

Abstract

With the changing times, technology has evolved in ways the human mind could not have fathomed 200 years ago, and the ride from the development of computers to robots has not been a smooth one. Artificial intelligence (AI) is a multi-disciplinary field that is aided by many other fields. The early years of AI were hopeful, and both a symbolic and a sub-symbolic approach allowed AI to achieve triumphs, but like any other system AI has had its share of darkness and hopelessness; despite all these problems it has emerged as a growing field with vast scope. In this paper, we analyze the different attempts to explain the process of developing a sophisticated robot and the problems that underlie it.

Keywords

Robots; Neuroscience; Cognitive psychology

Introduction

“I think, therefore I am.” René Descartes, who wrote this famous saying, struggled long and hard to distinguish the consciousness of human beings from the workings of mere machines. He could not have remotely imagined that machines too would start thinking, yet the computer is sometimes referred to as an electronic brain. No supercomputer has yet achieved the sophistication of Ultron in the movie Avengers, who plotted a malevolent plan to save mankind from extinction. Still, these remarkable machines can mimic, and in some cases surpass, their human creators at testing hypotheses and remembering facts. They make plans, hold limited conversations, play chess and even compose music.

Artificial intelligence (AI) encompasses a huge interdisciplinary arena that has been aided by many fields such as computer science, psychology, philosophy, neuroscience, mathematics, mechanical engineering, linguistics and cybernetics. AI's main purpose is the design and production of automated systems (computer programs and machines) that execute tasks which require intelligent behavior (i.e., jobs that require adaptation to complex and shifting situations).

Scientists in the field of artificial intelligence design the sets of instructions (programs) that enable machines to do all these things. Until recently, most workers in artificial intelligence did not care whether the machine actually used human strategies as long as it behaved as it was supposed to. The success of computers in accomplishing tasks has often put humans to shame. As early as the 1950s, when computers were still at the Neanderthal phase of their evolution, they could use logical principles to find alternate proofs of theorems in symbolic logic [1]. The machines were assiduous and, barring a power outage, would continue to operate with consistent competence day and night, even without coffee and lunch breaks. They were fast: calculations were done in millionths or trillionths of a second, making the human mind look like molasses.

A robot, in order to act astutely, must be able to reason from the evidence its sensors detect to the assumptions that govern its actions. From its commencement, the cognitive revolution was guided by an analogy: the mind is like a computer. We are a set of software programs running on three pounds of neural hardware, and cognitive psychologists were interested in the software. Computer scientists are now designing machines called neural networks that try to emulate the brain's massive lattice of densely connected neurons. In this technique, called connectionism, simple processing units are linked to one another in a web-like system, much as neurons in the brain are, sharing information and working in parallel. Like the human brain, neural networks do not always find the very best solution to a problem, but they usually find a good one. They also have the potential to learn from experience by adjusting the strengths of their "neural" connections in response to new information.
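
As a minimal sketch of this connectionist idea, the snippet below links a few simple units with weighted connections and strengthens the connections between units that are active together (a basic Hebbian update); the number of units, the activity patterns, and the learning rate are illustrative assumptions, not taken from any particular model.

```python
import numpy as np

# A minimal connectionist sketch: a handful of "units" joined by weighted
# connections. Learning adjusts connection strengths in response to the
# activity patterns the network experiences (a basic Hebbian rule:
# units that fire together wire together). All values are illustrative.

n_units = 4
weights = np.zeros((n_units, n_units))      # connection strengths
learning_rate = 0.1

# A few activity patterns the network "experiences" (1 = active, 0 = silent).
patterns = np.array([
    [1, 1, 0, 0],
    [1, 1, 0, 0],
    [0, 0, 1, 1],
])

for x in patterns:
    # Hebbian update: strengthen the connection between co-active units.
    weights += learning_rate * np.outer(x, x)
np.fill_diagonal(weights, 0.0)              # no self-connections

print(weights)  # units 0-1 and 2-3 end up with the strongest connections
```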

In 1956, the computer scientist John McCarthy promulgated the term "Artificial Intelligence" (AI) to describe the study of intelligence by implementing its essential features on a computer. Instantiating an intelligent system using man-made hardware, rather than our own "biological hardware" of cells and tissues, would demonstrate ultimate understanding and have obvious practical applications in the creation of intelligent devices or even robots.

The early years of AI were hopeful and filled with feats. Both a symbolic approach (i.e., an approach that uses symbols and rules) and a sub-symbolic approach (i.e., an approach that does not use rules but learns by itself) brought AI many triumphs. In the symbolic approach, some of the initial feats include the presentation of the General Problem Solver by Newell, Shaw, and Simon in 1963, a program designed to emulate human problem-solving protocols, and John McCarthy's LISP in 1958, which became one of the prime languages in AI. Some of the early successes in sub-symbolic AI include the development of the ADALINE by Widrow and Hoff [2], which enhanced Hebb's learning methods, and the perceptron, by Frank Rosenblatt, which was the precursor of the artificial neural networks we know today.
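
As a minimal sketch of Rosenblatt's perceptron idea, the snippet below trains a single threshold unit on the linearly separable logical AND function using the classic error-correction rule; the learning rate and number of epochs are illustrative choices.

```python
import numpy as np

# Rosenblatt-style perceptron: a weighted sum passed through a hard threshold,
# with weights nudged whenever the prediction is wrong. AND is linearly
# separable, so this training procedure converges.

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])                  # logical AND

w = np.zeros(2)
b = 0.0
lr = 0.1

for epoch in range(20):
    for xi, target in zip(X, y):
        pred = int(np.dot(w, xi) + b > 0)   # hard threshold activation
        error = target - pred
        w += lr * error * xi                # perceptron learning rule
        b += lr * error

print([int(np.dot(w, xi) + b > 0) for xi in X])  # [0, 0, 0, 1]
```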

Discussion

During the dawn of robotics, scientists saw that there was colossal latitude for developing human-looking robots, but this exaggeratedly positive vision of creating rational machines was bitterly crumpled. By the end of the 1960s, snags arose as the AI promises of the decade before fell short and started to be considered "puff." Research in sub-symbolic AI was largely demoted after Minsky and Papert formally proved in 1969 that perceptrons (i.e., simple single-layer neural networks) were limited in their representation mechanism because they could not represent the XOR (exclusive-OR) logical problem: a perceptron could not be trained to recognize situations in which either one or another set of inputs had to be present, but not both at the same time.
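
The limitation Minsky and Papert identified is easy to demonstrate: the same kind of single-layer perceptron as above, trained on XOR instead of AND, never reaches zero error, because no single line separates the two output classes. The sketch below is illustrative, with an arbitrary learning rate and epoch count.

```python
import numpy as np

# Train a single-layer perceptron on XOR and inspect the result. Because XOR
# is not linearly separable, no single weight vector and bias can classify
# all four cases correctly, so training never reaches zero error.

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y_xor = np.array([0, 1, 1, 0])

w, b, lr = np.zeros(2), 0.0, 0.1
for epoch in range(1000):
    for xi, target in zip(X, y_xor):
        pred = int(np.dot(w, xi) + b > 0)
        w += lr * (target - pred) * xi
        b += lr * (target - pred)

preds = [int(np.dot(w, xi) + b > 0) for xi in X]
print(preds, "target:", list(y_xor))   # at least one case is always wrong
```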

In the 1980s and 1990s, McClelland, Rumelhart, and the PDP Research Group [3] disseminated artificial neural networks and the connectionist movement, which had languished since the late 1960s. In the connectionist approach, cognitive functions and behavior are seen as emerging from the parallel, distributed processing activity of interconnected neural populations, with learning taking place through the adaptation of connections among the participating neurons. PDP attempts to be a general architecture and to explain the mechanisms of perception, memory, language, and thought.
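
The remedy the PDP framework popularized can be sketched in a few lines: adding a hidden layer and adapting all connection weights by error backpropagation lets a small network learn XOR. The layer sizes, learning rate, and number of training steps below are illustrative choices, not taken from the PDP volumes.

```python
import numpy as np

# A tiny two-layer network trained by backpropagation, in the spirit of the
# parallel distributed processing models popularized by Rumelhart and
# McClelland. With a hidden layer, XOR becomes learnable.

rng = np.random.default_rng(1)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 4)); b1 = np.zeros(4)     # input -> hidden weights
W2 = rng.normal(size=(4, 1)); b2 = np.zeros(1)     # hidden -> output weights
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
lr = 1.0

for step in range(5000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass (gradients of squared error through the sigmoids).
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(axis=0)

print(np.round(out.ravel(), 2))   # should approach [0, 1, 1, 0]
```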

In its latest instantiation, ACT-R [4,5] is presented as a hybrid cognitive architecture. The symbolic and sub-symbolic representations work together to explain how people organize knowledge and produce intelligent behavior. The ACT-R theory tries to evolve toward a system that can perform the full range of human cognitive tasks, capturing in great detail how we perceive, think about, and act on the world. Because of its general architecture, the theory is applicable to a wide variety of research disciplines, including perception and attention, learning and memory, problem solving and decision making, and language processing.
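
As a loose, purely illustrative analogy for this hybrid arrangement (not ACT-R's actual implementation), the toy sketch below pairs a symbolic production rule with a sub-symbolic activation value that decides whether a declarative fact can be retrieved; the chunk contents, activations, and threshold are invented for the example.

```python
# Toy illustration of a hybrid symbolic/sub-symbolic step: a production rule
# fires against declarative "chunks", while an activation value decides which
# chunk (if any) can be retrieved. All values here are made up.

RETRIEVAL_THRESHOLD = 0.5

# Declarative memory: each chunk pairs symbolic content with an activation.
chunks = [
    ({"a": 3, "b": 4, "sum": 7}, 1.2),   # well-practiced fact, high activation
    ({"a": 2, "b": 2, "sum": 4}, 0.1),   # weak fact, below threshold
]

def retrieve(a, b):
    """Sub-symbolic side: pick the most active matching chunk, if any."""
    matches = [(act, c) for c, act in chunks
               if c["a"] == a and c["b"] == b and act >= RETRIEVAL_THRESHOLD]
    return max(matches, key=lambda m: m[0])[1] if matches else None

def answer_addition(goal):
    """Symbolic side: a production rule that fires when retrieval succeeds."""
    chunk = retrieve(goal["a"], goal["b"])
    if chunk is not None:
        return chunk["sum"]    # rule fires: report the retrieved sum
    return None                # retrieval failure: another strategy would be needed

print(answer_addition({"a": 3, "b": 4}))  # 7
print(answer_addition({"a": 2, "b": 2}))  # None (activation too low)
```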

Artificial intelligence also concentrates on automated theorem proving, where algorithms are applied to check whether something deductively follows from something else. AI has two chief devotions. One is to use the power of computers to amplify human thinking, just as we use motors to enhance human or horse power. The other is to use a computer's artificial intelligence to understand how humans think. With the innovation of fMRI and PET scans, it is possible to explore the neural mechanisms that produce human cognition.
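
As a minimal sketch of what "checking whether something deductively follows" can mean in the simplest propositional case, the brute-force entailment test below enumerates truth assignments; the formulas and function names are illustrative, not a real theorem prover.

```python
from itertools import product

# Brute-force propositional entailment: a conclusion follows deductively from
# the premises if it is true in every truth assignment that makes all the
# premises true. Formulas are given as Python functions over truth values.

def entails(premises, conclusion, variables):
    for values in product([True, False], repeat=len(variables)):
        assignment = dict(zip(variables, values))
        if all(p(assignment) for p in premises) and not conclusion(assignment):
            return False          # found a counterexample
    return True

# Example: from "p -> q" and "p", conclude "q" (modus ponens).
premise1 = lambda v: (not v["p"]) or v["q"]   # p -> q
premise2 = lambda v: v["p"]
conclusion = lambda v: v["q"]

print(entails([premise1, premise2], conclusion, ["p", "q"]))   # True
print(entails([premise1], conclusion, ["p", "q"]))             # False
```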

Functional magnetic resonance imaging (fMRI) detects changes in blood flow to measure brain activity with a fine degree of spatial resolution. It can detect changes so subtle that it is possible to differentiate between the activity patterns created when you think about turning left and when you think about turning right. There are already tools, such as Emotiv's EPOC headset, that can detect specific patterns of brainwaves which can be used to send commands to a robot, but it is a tedious process because you have to train your brain to produce those brainwaves. fMRI, by contrast, can read your thoughts directly, with a vaguely alarming degree of accuracy. The other big advantage is that you do not need any sort of implant, just an expensive machine. Scientists working on neural networks believe they can give computers a humanlike ability to think, remember, and solve problems, and without question some of the simulations have been successful. Critics, however, remain skeptical, because human intelligence arises from the experiences of human life. Routine events like trips to the department store require common-sense knowledge of the world, which a living thing absorbs through its senses.
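
The general pipeline described above can be sketched schematically: classify a window of recorded brain activity into one of a few trained "intentions" and map each intention to a robot command. The classifier and robot below are toy stand-ins so the sketch runs; they are not Emotiv's API, a real fMRI decoder, or any vendor's robot interface.

```python
# Purely illustrative brain-to-robot pipeline: decode an intention from a
# feature vector, then translate the intention into a robot command.

INTENTION_TO_COMMAND = {
    "think_left": "turn_left",
    "think_right": "turn_right",
    "rest": "stop",
}

class NearestPatternClassifier:
    """Toy decoder: label a feature vector by its nearest trained template."""
    def __init__(self, templates):
        self.templates = templates          # {intention: feature vector}

    def predict(self, features):
        def distance(template):
            return sum((a - b) ** 2 for a, b in zip(features, template))
        return min(self.templates, key=lambda k: distance(self.templates[k]))

class ConsoleRobot:
    """Stand-in robot that just prints the command it receives."""
    def send(self, command):
        print("robot command:", command)

decoder = NearestPatternClassifier({
    "think_left": [1.0, 0.0], "think_right": [0.0, 1.0], "rest": [0.5, 0.5],
})
robot = ConsoleRobot()

signal_window = [0.9, 0.1]                  # features from one recording window
intention = decoder.predict(signal_window)  # -> "think_left"
robot.send(INTENTION_TO_COMMAND.get(intention, "stop"))  # fail-safe default: stop
```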

There are by this time different kinds of robots: factory automation systems that weld and assemble car engines; machines that place milk into bottles; devices that support and assist surgeons in operations; vehicles for planetary surveys. These robots typically consist of one or two arms and a controller, and they are supervised by a robot work-station controller. This controller is responsible for monitoring auxiliary sensors that detect the presence, distance, velocity, shape, weight, or other properties of objects. Robots may be equipped with vision systems, depending on the application for which they are used. Mostly, these robots are stationary, and work is conveyed to them by robot carts called autonomous guided vehicles (AGVs).
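
A work-station controller of the kind described above can be sketched as a loop that polls auxiliary sensors and only commands the arm when the part delivered by the AGV is in place; the sensor fields, thresholds, and simulated readings below are illustrative assumptions, not any real controller's interface.

```python
from dataclasses import dataclass

# Schematic work-station controller loop: poll auxiliary sensors (presence,
# distance, weight) and command the arm only when a delivered part is present.
# Sensor readings are simulated so the sketch runs on its own.

@dataclass
class SensorReadings:
    part_present: bool
    distance_mm: float
    weight_kg: float

def read_sensors(cycle):
    """Stand-in for the controller's auxiliary sensor interface."""
    return SensorReadings(part_present=(cycle >= 2), distance_mm=120.0, weight_kg=3.4)

def controller_loop(cycles=4):
    for cycle in range(cycles):
        readings = read_sensors(cycle)
        if readings.part_present and readings.weight_kg < 10.0:
            print(f"cycle {cycle}: part in place -> command arm to pick and place")
        else:
            print(f"cycle {cycle}: waiting for AGV to deliver part")

controller_loop()
```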

In a sense, robots already exist, but the dream of making them human-like remains unfulfilled, because humans have a reasoning capacity that machines lack. Humans apply reasoning to everything: to what they see, what they do, what they know, and even what they don't know.

Conclusion

For better or worse, human thought is inseparable from emotion, motives and the pursuit of pleasure and happiness. Human beings know they think, whereas computers, as far as anyone knows, lack consciousness. But then, perhaps they don't need it, because they have us.

Functional magnetic resonance imaging (fMRI) is a remarkable technology: it has been used in attempts at everything from reconstructing the content of dreams to training new skills during sleep. It is also useful for controlling robots; Israeli researchers have managed to get a robot to move around a room steered purely by a person's thoughts.

References

  1. Newell A, Simon HA (1972) Human problem solving. Prentice-Hall, Englewood Cliffs, NJ, USA.
  2. Widrow B, Hoff ME (1960) Adaptive switching circuits. In: IRE WESCON Convention Record 4: 96-104.
  3. McClelland JL, Rumelhart DE, The PDP Research Group (1986) Parallel distributed processing: Explorations in the microstructure of cognition. MIT Press, Cambridge, MA, USA.
  4. Anderson JR, Bothell D, Byrne MD, Douglass S, Lebiere C, et al. (2004) An integrated theory of the mind. Psychological Review 111: 1036-1060.
  5. Anderson JR, Lebiere C (1998) The atomic components of thought. Erlbaum, Mahwah, NJ, USA.