Does a self-driving car meet the definition of consciousness?

  • Google's self-driving car does a great deal that seems to meet reasonable definitions of consciousness (or "self-awareness," if you prefer that term). It appears to be aware of itself, its environment, and the relationship between the two. It can make sophisticated, prioritized decisions; it has long-term and short-term goals; it can learn based on which previous decisions were followed by successful outcomes; and so on. Is there any objective definition of "consciousness" that doesn't apply to such an entity?

    Background: the Google car has a camera and other sensors that capture data about its environment, continually updating an internal 3D model of that environment. It makes predictions about the near-term behavior of other things in the environment, such as pedestrians and cars, and it keeps track of its own position. It has a concept of what a vehicle is, it knows that it is a vehicle, and it recognizes other objects as vehicles. But it also knows that it is a very special case of a vehicle, in that it is the only vehicle that is itself. While it can try to predict the behavior of other vehicles, it does not try to predict its own behavior, since it actually controls that behavior. That is, it appropriately differentiates itself from all other things; it has a meaningful and useful concept of "self" (a toy sketch of this self-exclusion appears after the answer below).

    Update: various people have indicated that to be conscious, it must be "aware that it exists" (which I have yet to be convinced is any different from "aware of itself," or how we would know if it was), and that it must be capable of natural language processing and synthesis (which other software is, but probably not the Google car). It was also mentioned that such an entity is too specialized and single-purpose to be considered conscious (though I am not sure why that matters). Finally, it was mentioned that computers, unlike humans, must use predetermined rules and strategies to accomplish their goals. This last point, to me, reflects an overly simplistic, black-and-white view of computer software: both humans and computer software can use overarching rules to create derived rules, which can then create further derived rules, and so on.

  • Answer:

    No, I would not characterize Google's self-driving cars as conscious. These cars are not "aware" of anything. They simply take input from sensors, run a predetermined set of instructions on the resulting data, and use the output of those instructions to control the vehicle. They don't learn, and they don't adapt outside of their own programming. The vehicle has no short-term or long-term goals of its own; it only seems to have goals because those who programmed it had goals of their own. Similarly, the car itself has no concept of "self." Those who programmed it had that concept in mind when they determined how the car would operate, but the car itself is not conscious. To an outside observer, Google's self-driving cars may seem self-aware, but if you know how they work, you can see clearly that they do not meet that definition.

Andrew Meyer at Quora
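
Both sides of this exchange can be made concrete with a toy model. The sketch below is purely hypothetical (the class and field names are illustrative assumptions, not Google's code or architecture): a deterministic sense-predict loop that nonetheless maintains a "self" distinction, predicting every tracked object's motion except its own. It captures the question's self-differentiation point and the answer's rule-following objection at the same time.

    # Toy sketch of a self-excluding world model. Hypothetical; not Google's code.
    # It illustrates both claims at once: the loop is purely rule-following
    # (the answer's objection), yet it treats "self" as a special case
    # (the question's observation).

    from dataclasses import dataclass

    @dataclass
    class TrackedObject:
        obj_id: str
        position: tuple   # (x, y) in meters
        velocity: tuple   # (vx, vy) in meters/second

    class WorldModel:
        SELF_ID = "ego"   # the one object the model never predicts

        def __init__(self):
            self.objects = {}

        def update(self, sensed_objects):
            # Overwrite the model with the latest sensor snapshot.
            self.objects = {o.obj_id: o for o in sensed_objects}

        def predict(self, dt):
            # Constant-velocity prediction for every object EXCEPT the self:
            # the ego vehicle's future is chosen by a planner, not predicted.
            predictions = {}
            for obj_id, obj in self.objects.items():
                if obj_id == self.SELF_ID:
                    continue  # "the only vehicle that is itself"
                x, y = obj.position
                vx, vy = obj.velocity
                predictions[obj_id] = (x + vx * dt, y + vy * dt)
            return predictions

    # One deterministic sense-predict tick:
    model = WorldModel()
    model.update([
        TrackedObject("ego", (0.0, 0.0), (10.0, 0.0)),
        TrackedObject("car_42", (30.0, 3.5), (8.0, 0.0)),
    ])
    print(model.predict(dt=1.0))  # {'car_42': (38.0, 3.5)} -- no entry for 'ego'

Whether that SELF_ID check amounts to a "concept of self" or is merely a hard-coded special case is, of course, exactly what this thread is debating.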

Other answers

Consciousness isn't a two-valued quality. I would peg the car's consciousness as mostly far superior to an insect's, but short of a rodent's in some respects and not in others.

When more autonomous cars are on the road, when they can communicate at a higher bit rate than a turn signal affords, and when the cars have deeper internal monitoring of vehicle state and longer-term health issues, the question can be re-addressed. The highly structured nature of driving in the U.S. seems like a crutch for autonomy; something that could effectively drive in a heavily populated environment with no real traffic laws might be more comparable to ourselves and to animals in nature. A population of taxi cabs or transport trucks with minimal central control, responding to and competing for opportunities as they arise and self-directing their own stops for maintenance, would also be more analogous to natural systems with degrees of consciousness.

It also isn't clear how capable the car is of learning as it drives, or whether external systems supply some of that ability. Discrepancies between the stored map and the live discovered one are surely internalized (a toy sketch of such map updating follows below), but what about traffic patterns, the behaviours of specific other drivers on the same road, and overall driving performance?

Lucas Walter
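
To make the "internalizing map discrepancies" point concrete, here is a minimal sketch of reconciling a stored map with a live scan. It is an assumption-laden toy (real mapping stacks use probabilistic occupancy grids and SLAM, not a flat list of cells): cells where the live reading disagrees with the stored value are nudged toward the new observation.

    # Minimal sketch of internalizing stored-map vs. live-scan discrepancies.
    # Hypothetical; a real system would use probabilistic occupancy grids,
    # SLAM, and far richer sensor models than a flat list of cells.

    def update_map(stored, live, learning_rate=0.3):
        """Blend each stored occupancy estimate (0.0 = free, 1.0 = occupied)
        toward the live observation; disagreements shift the map the most."""
        return [
            s + learning_rate * (l - s)  # move a fraction of the gap
            for s, l in zip(stored, live)
        ]

    stored_map = [0.0, 0.0, 1.0, 0.0]   # stored map: wall at cell 2
    live_scan  = [0.0, 1.0, 1.0, 0.0]   # live scan: new obstacle at cell 1

    stored_map = update_map(stored_map, live_scan)
    print(stored_map)                   # [0.0, 0.3, 1.0, 0.0]

The open question in the answer above is whether anything like this update is also applied to higher-level regularities: traffic patterns, the habits of specific drivers, overall driving performance.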

No. Its goals are pre-determined by its programmers, and any subsequent goals it generates itself are a subset of, or flow from, the originals. It cannot escape its original input. Terrence Reed's analogy of the gravel truck is an excellent example of this: even if Google's car did have a windscreen pressure sensor, it would never learn to avoid gravel trucks unless protecting itself were part of its original goals. It can never self-identify a new goal; a conscious entity can.

Tom Allen

It certainly has a level of consciousness, about that of a cockroach. Is it self-aware? No, it has not reached that level of consciousness yet. If it encountered a gravel truck that occasionally dropped rocks that dented its hood and cracked its windshield, it would be incapable of perceiving the damage, and even if it could, it would not know to avoid the truck until a program was written instructing it to avoid following gravel trucks. It cannot learn from the comparative evaluation of experience (a sketch of what such learning might look like follows below).

Terrence Reed
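
For contrast, here is a sketch of what "learning from the comparative evaluation of experience" could look like. It is entirely hypothetical (no such module is claimed to exist in Google's car): damage events are credited back to whatever the car was following at the time, and a category that accumulates enough blame becomes a new avoidance rule that no programmer wrote explicitly.

    # Hypothetical sketch of learning an avoidance rule from experience,
    # the capability Terrence argues the car lacks. Damage is attributed
    # to whatever was being followed; categories that accumulate enough
    # blame become self-generated avoidance rules.

    from collections import defaultdict

    class ExperienceLearner:
        def __init__(self, damage_threshold=2):
            self.damage_counts = defaultdict(int)
            self.damage_threshold = damage_threshold
            self.avoid = set()  # rules nobody programmed directly

        def record(self, followed_category, damage_detected):
            if damage_detected:
                self.damage_counts[followed_category] += 1
                if self.damage_counts[followed_category] >= self.damage_threshold:
                    self.avoid.add(followed_category)

        def should_avoid(self, category):
            return category in self.avoid

    learner = ExperienceLearner()
    learner.record("gravel_truck", damage_detected=True)   # cracked windshield
    learner.record("sedan", damage_detected=False)
    learner.record("gravel_truck", damage_detected=True)   # dented hood
    print(learner.should_avoid("gravel_truck"))            # True
    print(learner.should_avoid("sedan"))                   # False

Whether such a rule counts as "self-identified" or is just another consequence of the built-in meta-rule "avoid what damages you" is precisely Tom Allen's objection above.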

Consciousness is a much-debated term in science as well as in religion, and what defines human consciousness is deeply subjective to human experience. It may simply be that we all think we are conscious, when in fact it is just one of the activities we perform, like eating.

There are several theories in science that try to explain what consciousness is. One is Global Workspace Theory (http://en.wikipedia.org/wiki/Global_Workspace_Theory), and there is a cognitive architecture named LIDA (http://en.wikipedia.org/wiki/LIDA_(cognitive_architecture)) that makes use of consciousness. The central point many of these theories make is that if you can read the contents of your own mind (i.e., if a program can read the values of its variables at any time, and other programs can do so as well), then those programs are conscious per that definition (a toy sketch of such a shared workspace follows below). Do they "experience" the same "feeling" of consciousness? That's debatable. But all such programs in the system have the "same" notion of consciousness with respect to each other, just as we humans do. If you are interested in this topic and want to find real ongoing work, search for Cognitive Architectures and consciousness on Google.

Pranav Mirajkar
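
The "programs reading each other's contents" criterion described above can be sketched directly. The toy below is loosely in the spirit of Global Workspace Theory, but its structure and module names are illustrative assumptions, not LIDA's actual design: modules broadcast their current contents to a shared workspace, and any module can read what any other has posted.

    # Toy global-workspace sketch, loosely inspired by Global Workspace
    # Theory. Illustrative assumptions only; this is not the LIDA
    # architecture. Each module broadcasts its internal state and can
    # read every other module's broadcasts -- the "readable contents of
    # mind" criterion described above.

    class GlobalWorkspace:
        def __init__(self):
            self.board = {}                     # module name -> latest contents

        def broadcast(self, module_name, contents):
            self.board[module_name] = contents  # post state for all to see

        def read(self, module_name):
            return self.board.get(module_name)  # any module may read any other

    workspace = GlobalWorkspace()

    # A perception module posts what it currently "sees"...
    workspace.broadcast("perception", {"object": "pedestrian", "distance_m": 12.0})

    # ...and a planning module reads perception's contents, not just its own.
    seen = workspace.read("perception")
    workspace.broadcast("planning", {"action": "brake", "because": seen})

    print(workspace.read("planning"))
    # {'action': 'brake', 'because': {'object': 'pedestrian', 'distance_m': 12.0}}

By that reading of the definition, both modules are "conscious" of each other's contents; whether this is the consciousness the original question asks about is left to the reader.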
