E-Learning Diploma Forum


About Artificial Intelligence

Mohamed AbdElnaby
Maboali
Walid M. Gadalla
Shimaa Ahmed Mostafa
Dr. M EL Zayat
9 posters


About Artificial Intelligence

Post by Dr. M EL Zayat Thu Oct 28, 2010 11:01 am

We all learned in the first lecture about Artificial Intelligence and its applications. We also learned that, in general, Artificial Intelligence (AI) refers to the artificial simulation of human brain function by a machine or a system.

The processes involved in human brain function are endless and represent an almost impossible challenge for any systems developer trying to simulate even some of them.

Because of this, the challenge of simulating human brain functions has been broken down into several sub-challenges to make it more tractable.


Search the Internet for artificial intelligence challenges; read, think about, and analyze what you find, and then share your understanding with us.

Further participation in related topics is warmly welcomed.


Thanks,
Dr. Mohamed EL Zayat
Remember, I always expect the best from you

Dr. M EL Zayat
Admin

Posts : 32
Points : 40
Join date : 2010-10-19
Location : 33 Almesaha St, Aldokki - Giza - Egypt

https://melzayat.all-up.com


Challenge Problems for Artificial Intelligence

Post by Shimaa Ahmed Mostafa Fri Oct 29, 2010 12:35 am

AI is the branch of computer science concerned with making computers behave like humans. The term was coined in 1956 by John McCarthy at the Dartmouth Conference. Artificial intelligence includes:
Game playing: programming computers to play games such as chess and checkers.
Expert systems: programming computers to make decisions in real-life situations (for example, some expert systems help doctors diagnose diseases based on symptoms; a minimal sketch follows this list).
Natural language: programming computers to understand natural human languages.
Neural networks: systems that simulate intelligence by attempting to reproduce the types of physical connections that occur in animal brains.
Robotics: programming computers to see, hear, and react to other sensory stimuli.
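
To make the expert-systems item above concrete, here is a minimal Python sketch of a rule-based system. The rules, symptoms and conclusions are invented purely for illustration and are not taken from any real medical system.

Code:
# A minimal rule-based "expert system" sketch: each rule maps a set of
# required symptoms to a tentative conclusion. All rules here are hypothetical.
RULES = [
    ({"fever", "cough", "sore throat"}, "possible flu"),
    ({"sneezing", "runny nose"}, "possible common cold"),
    ({"headache", "sensitivity to light"}, "possible migraine"),
]

def diagnose(symptoms):
    """Return every conclusion whose required symptoms are all present."""
    observed = set(symptoms)
    return [conclusion for required, conclusion in RULES if required <= observed]

print(diagnose(["fever", "cough", "sore throat", "fatigue"]))  # -> ['possible flu']

Real expert systems add certainty factors, explanation facilities and far larger rule bases, but the basic idea is the same: the knowledge lives in the rules rather than in the program logic.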
CHALLENGES OF ARTIFICIAL INTELLIGENCE
Artificial Intelligence researchers and practitioners claim
that computers are, to a certain extent, like humans. However, their claims of replicating complex
animal intelligence, and of making machines that not only think but also have feelings and emotions,
are perhaps too much. After all, a computer, we must not forget, is just a machine.
The first and most important problem of artificially made intelligent devices is that they
never have real understanding.

  • A computer, or artificial intelligence, can behave only according to precise commands.
  • Creativity is another area where Artificial Intelligence is challenged. Although there is a possibility of
    programmed creativity in computers, a computer does lack creativity in its original
    sense. "The computer is fantastically single-minded."

We have heard of machines that walk, talk, and move, but so far we have never heard of a machine that
dreams.

  • Machines have no free choice. They are preprogrammed. They cannot do other than what they have
    been programmed to do. They are dynamic, yet single-minded and purposive. They do not
    have the creative ability that humans have.


In my opinion:
Computers have artificial intelligence, but humans have natural intelligence.
Here, by artificial intelligence we mean programmed intelligence, and by natural intelligence
we mean biological and sociological intelligence.

Shimaa Ahmed Mostafa

Posts : 9
Points : 13
Join date : 2010-10-22
Location : Maadi


Artificial Intelligence

Post by Walid M. Gadalla Fri Oct 29, 2010 1:02 am

Artificial Intelligence has raised some interesting theories of the human mind and thought, and has
influenced our understanding of the human brain. Thus one may understand Artificial
Intelligence as a discovery of human rationality in its urge to be creative. The
development of Artificial Intelligence and of the other cognitive sciences has led us to a more
diverse understanding of the word 'intelligence', and further to a better understanding of
Artificial Intelligence itself. This reply is concerned very specifically with the endless possibilities of
Artificial Intelligence as well as the toughest challenges it has to face in the near future.
These challenges are the toughest precisely because they have a tremendous impact on humanity philosophically,
socially, ethically and morally.
The possibilities of Artificial Intelligence are endless, and they promise us many good things for the
future. Objectively speaking, "AI can have two purposes. One is to use the power of
computers to augment human thinking, just as we use motors to augment human or horse
power. Robotics and expert systems are major branches of that. The other is to use a
computer's artificial intelligence to understand how humans think…to understand the human
mind."
CHALLENGES OF ARTIFICIAL INTELLIGENCE
Computers are machines. Can we equate machines with humans? Certainly,
computers are a simulation of the human brain; they work in the way humans work. In a certain
sense we can say that computers, too, have minds and intellect as humans have, so there is
some similarity between them. Yet there is a big difference between a computer and a human. It would be totally wrong to reduce humans to machines. No machine (computers, robots or any
other AI device) can synthesize the supreme jewels of human existence, essence and
conscious awareness. A machine cannot enjoy the freedom a human being does. Machines
have no free choice. They are preprogrammed. They cannot do other than what they have
been programmed to do. They are dynamic, yet single-minded and purposive. They do not
have the creative ability that humans have.

Finally, when Artificial Intelligence researchers aim at, and claim to be, putting the human
mind or brain into the computer, I presume they say so only metaphorically. The human mind
or brain has a lot of peculiarities, which hardly anyone is serious about imitating. To
understand the human mind fully is itself a big challenge. How much more challenging it will
be to mimic such a human brain!

Walid M. Gadalla

Posts : 5
Points : 5
Join date : 2010-10-22
Location : Maadi


Re: About Artificial Intelligence

Post by Maboali Sat Oct 30, 2010 5:55 pm

Introduction

Artificial Intelligence (AI) is a perfect example of how sometimes science moves more slowly than we would have predicted. In the first flush of enthusiasm at the invention of computers it was believed that we now finally had the tools with which to crack the problem of the mind, and within years we would see a new race of intelligent machines. We are older and wiser now. The first rush of enthusiasm is gone, the computers that impressed us so much back then do not impress us now, and we are soberly settling down to understand how hard the problems of AI really are.

What is AI?

In some sense it is engineering inspired by biology. We look at animals, we look at humans and we want to be able to build machines that do what they do. We want machines to be able to learn in the way that they learn, to speak, to reason and eventually to have consciousness. It is the science and engineering of making intelligent machines, especially intelligent computer programs. It is related to the similar task of using computers to understand human intelligence, but AI does not have to confine itself to methods that are biologically observable.

Branches of AI

Here's a list, but some branches are surely missing because no one has identified them yet. Some of these may be regarded as concepts or topics rather than full branches.
• Logical AI
What a program knows about the world in general, the facts of the specific situation in which it must act, and its goals are all represented by sentences of some mathematical logical language. The program decides what to do by inferring that certain actions are appropriate for achieving its goals.
• Search
AI programs often examine large numbers of possibilities, e.g. moves in a chess game or inferences by a theorem proving program. Discoveries are continually made about how to do this more efficiently in various domains.
• Pattern Recognition
When a program makes observations of some kind, it is often programmed to compare what it sees with a pattern. For example, a vision program may try to match a pattern of eyes and a nose in a scene in order to find a face. More complex patterns, e.g. in a natural language text, in a chess position, or in the history of some event are also studied. These more complex patterns require quite different methods than do the simple patterns that have been studied the most.
• Representation
Facts about the world have to be represented in some way. Usually languages of mathematical logic are used.
• Inference
From some facts, others can be inferred. Mathematical logical deduction is adequate for some purposes, but new methods of non-monotonic inference have been added to logic since the 1970s. The simplest kind of non-monotonic reasoning is default reasoning, in which a conclusion is inferred by default but can be withdrawn if there is evidence to the contrary. For example, when we hear of a bird, we may infer that it can fly, but this conclusion can be reversed when we hear that it is a penguin. It is the possibility that a conclusion may have to be withdrawn that constitutes the non-monotonic character of the reasoning. Ordinary logical reasoning is monotonic in that the set of conclusions that can be drawn from a set of premises is a monotonically increasing function of the premises. Circumscription is another form of non-monotonic reasoning.
• Common Sense Knowledge & Reasoning
This is the area in which AI is farthest from human-level, in spite of the fact that it has been an active research area since the 1950s. While there has been considerable progress, e.g. in developing systems of non-monotonic reasoning and theories of action, yet more new ideas are needed.
• Learning from Experience
Programs do that. The approaches to AI based on connectionism and neural nets specialize in that. There is also learning of laws expressed in logic. [Mit97] is a comprehensive undergraduate text on machine learning. Programs can only learn what facts or behaviors their formalisms can represent, and unfortunately learning systems are almost all based on very limited abilities to represent information.
• Planning
Planning programs start with general facts about the world (especially facts about the effects of actions), facts about the particular situation and a statement of a goal. From these, they generate a strategy for achieving the goal. In the most common cases, the strategy is just a sequence of actions.
• Epistemology
This is a study of the kinds of knowledge that are required for solving problems in the world.
• Ontology
Ontology is the study of the kinds of things that exist. In AI, the programs and sentences deal with various kinds of objects, and we study what these kinds are and what their basic properties are. Emphasis on ontology began in the 1990s.
• Heuristics
A heuristic is a way of trying to discover something, or an idea embedded in a program. The term is used variously in AI. Heuristic functions are used in some approaches to search to measure how far a node in a search tree seems to be from a goal. Heuristic predicates compare two nodes in a search tree to see if one is better than the other, i.e. constitutes an advance toward the goal, and may be more useful. (A minimal heuristic-search sketch follows this list.)
• Genetic Programming
Genetic programming is a technique for getting programs to solve a task by mating random Lisp programs and selecting the fittest over millions of generations.
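
As a concrete illustration of the Search and Heuristics branches above, here is a minimal A* search sketch in Python. The graph, step costs and heuristic values are invented for illustration; real systems search game trees or planning spaces with millions of nodes.

Code:
import heapq

# A small made-up map: each node maps to (neighbor, step cost) pairs, and
# H gives a heuristic estimate of the remaining distance to the goal "G".
GRAPH = {
    "A": [("B", 2), ("C", 5)],
    "B": [("D", 4), ("E", 1)],
    "C": [("G", 6)],
    "D": [("G", 3)],
    "E": [("G", 7)],
    "G": [],
}
H = {"A": 6, "B": 4, "C": 5, "D": 2, "E": 6, "G": 0}

def a_star(start, goal):
    """Expand the node with the lowest cost-so-far plus heuristic estimate."""
    frontier = [(H[start], 0, start, [start])]      # (f, g, node, path)
    best_g = {start: 0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g
        for neighbor, cost in GRAPH[node]:
            new_g = g + cost
            if new_g < best_g.get(neighbor, float("inf")):
                best_g[neighbor] = new_g
                heapq.heappush(frontier,
                               (new_g + H[neighbor], new_g, neighbor, path + [neighbor]))
    return None, float("inf")

print(a_star("A", "G"))   # -> (['A', 'B', 'D', 'G'], 9)

The heuristic is what keeps the search from examining every possibility: as long as it never overestimates the remaining cost, the first path found to the goal is also the cheapest.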

Examples of AI Applications:

These are some AI applications, but not all:
• Game playing
• Speech Recognition
• Understanding Natural Language
• Computer Vision
• Expert System
• Heuristic Classification


When did AI research start?

After the 2nd World War, a number of people independently started to work on intelligent machines. The English mathematician Alan Turing may have been the first. He gave a lecture on it in 1947. He also may have been the first to decide that AI was best researched by programming computers rather than by building machines. By the late 1950s, there were many researchers on AI, and most of them were basing their work on programming computers.

Things look very different now!—and not because the menu of possibilities has changed so much, though there are differences in emphasis now (nanotech and quantum computing were not so popular in the 50s, for instance). Rather, things look different because the plausible time-scale for the technological discontinuity associated with the advent of superhuman AI has become so excitingly near-term. There is even a popular label for this discontinuity: the Singularity. A reasonably large number of serious scientists now expect that superhuman AI, general-purpose molecular assemblers, uploading of human minds into software containers, and other amazing science-fictional feats may well be possible within the next century. Vernor Vinge, who originated the use of the term Singularity in this context, said in 1993 that he expected the event to occur before 2030. Ray Kurzweil, who has become the best-known spokesman for the Singularity idea, estimates 2045 or so.

One of the original goals of Artificial Intelligence (AI) was to create systems that had general intelligence, able to approach the breadth and depth of human-level intelligence (HLI). In the last five years, there has been a renewed interest in this pursuit, with a significant increase in research in cognitive architectures and general intelligence, as indicated by the first conference on Artificial General Intelligence. Although there is significant enthusiasm and activity, to date, evaluation of HLI systems has been weak, with few comparisons or evaluations of specific claims, making it difficult to determine when progress has been made. Moreover, shared evaluation procedures and infrastructure are missing. Establishing these elements could bring together the existing community and attract additional researchers interested in HLI who are currently inhibited by the difficulty of breaking into the field.

Challenges in Evaluating Human-Level Intelligent Systems:

One of the first steps in determining how to evaluate research in a field is to develop a crisp definition of its goals and, if possible, of the requirements for achieving those goals. Legg and Hutter (2007) review a wide variety of informal and formal definitions and tests of intelligence. Unfortunately, none of these definitions provide practical guidance on how to evaluate and compare the current state of the art in HLI systems. Over fifty years ago, Turing (1950) tried to finesse the issue of defining HLI by creating a test that involved comparison to human behavior, the Turing Test. In this test, no analysis of the components of intelligence was necessary; the only question was whether or not a system behaved in a way that was indistinguishable from humans. Although widely known and popular with the press, the Turing Test has failed as a scientific tool because of its many flaws: it is informal, imprecise, and is not designed for easy replication. Moreover, it tests only a subset of characteristics normally associated with intelligence, and it does not have a set of incremental challenges that can pull science forward (Cohen, 2005). As a result, none of the major research projects pursuing HLI use the Turing Test as an evaluation tool.

To sum up, there are a variety of claims that can be made about HLI at the systems level:
• The human mind is somehow so incredibly complex that we just can’t figure out how to implement one without reverse engineering the human brain.
• The human mind is somehow so incredibly simple that powerful intelligence can be achieved via one simple trick—say, logical theorem-proving; or back propagation in neural networks; or hierarchical pattern recognition; or uncertain inference; or evolutionary learning; and so on. Almost everyone who has seriously tried to make a thinking machine has fallen prey to the “one simple trick” fallacy.
Evaluating HLI systems involves several kinds of measures:
• Performance includes measures such as solution time, quality of solution, and whether or not a solution is found. These are the standard metrics used in evaluating AI systems. One must be careful when using CPU time because of variation in the underlying hardware. Usually solution time will be given in some hardware-independent measure (such as nodes expanded in a search) that can then be mapped to specific hardware.
• Scalability involves change in some performance variable as problem complexity changes. Scalability is an important metric for HLI systems because of the need for large bodies of knowledge acquired through long-term learning. Other scalability issues can arise from interacting with complex environments where the number of relevant objects varies.
• Generality: How well does a system (or architecture) support behavior across a wide range of tasks and domains? Concerns about task and domain generality are one of the primary factors that distinguish research in HLI from much of the other research in AI. This requires measures of diversity of tasks and domains, which are currently lacking. Given the primacy of Generality, it is not surprising that many other abstract metrics address aspects of behavior and system construction that are related to generality.
• Expressivity: What kinds or range of knowledge can an HLI system accept and use to influence behavior? This relates to generality because restrictions on expressiveness can, in turn, restrict whether a system can successfully pursue a task in a domain. For example, systems that only support propositional representations will have difficulty reasoning about problems that are inherently relational.
• Robustness: How does speed or quality of solutions change as a task is perturbed or some knowledge is removed or added? One can also measure robustness of architecture – how behavior changes as an aspect of the architecture is degraded – but this is rarely considered an important feature of HLI systems. Instead, the interest lies in how well the system can respond to partial or incomplete knowledge, incorrect knowledge, and changes in a task that require some mapping of existing knowledge to a novel situation.
• Instructability: How well can a system accept knowledge from another agent? Instructability emphasizes acquiring new skills and knowledge, as well as acquiring new tasks. Finer-grain measures of instructability include the language needed for instruction, the breadth of behavior that can be taught, and the types of interactions supported, such as whether the instructor is in control, whether the agent is in control, or whether dynamic passing of control occurs during instruction.
• Taskability: To what extent can a system accept and/or generate, understand, and start on a new task? Taskability is related to instructability, but focuses on working on new tasks. Humans are inherently taskable and retaskable, being able to attempt new tasks without requiring an external programmer who understands their internal representations. Humans also generate new tasks on their own. In contrast, most current systems only pursue the tasks and subtasks with which they were originally programmed and cannot dynamically extend the tasks they pursue.
• Explainability: Can the system explain what it has learned or experienced, or why it is carrying out some behavior? Humans do not have “complete” explainability – the ability to provide justifications for all decisions leading up to external behavior – so this capability is a matter of degree.

In My Opinion

I suggest that the human mind is a complex system involving many interlocking tricks cooperating to give rise to appropriate emergent structures and dynamics, but not so complex as to be beyond our future capability to engineer.







Maboali

Posts : 23
Points : 37
Join date : 2010-10-19
Age : 54
Location : Alexandria Egypt

https://www.facebook.com/EgyptLiveAtEdu


Re: About Artificial Intelligence

Post by Mohamed AbdElnaby Sun Oct 31, 2010 6:45 pm

In the name of God, the Most Gracious, the Most Merciful

Artificial Intelligence
By
Mohamed AbdElnaby


1- What Is Artificial Intelligence? What Is AI?
2- Strong artificial intelligence
3- Weak artificial intelligence
4- Philosophical criticism and support of strong AI
5- References

1- What Is Artificial Intelligence? What Is AI?

Artificial Intelligence is the part of computer science concerned with designing intelligent computer systems, that is, systems that exhibit the characteristics we associate with intelligence in human behavior: understanding language, learning, reasoning, solving problems, and so on (1).

Definition: Artificial intelligence is a system in which machines imitate human capabilities, especially in decision making, and carry out functions effectively. There is more than one definition of artificial intelligence, but the one mentioned above is the most suitable, since it covers many functions.
Development
Artificial intelligence was developed to provide machines and systems that can deal with dangerous work, such as work in mineralogy.
We can call artificial intelligence "the science of computer thinking", or say that "the machine is intelligent like a human being".
The scope of artificial intelligence is very wide, ranging from machine translation to chess games and puzzles.
The countries that have concentrated on AI include Japan, the U.S.A. and the European countries.

AI has traditionally relied on the Lisp programming language and the Prolog logic programming language; these two languages are more widely known for this purpose than others (2).

2- Strong AI and weak AI:

One popular and early definition of artificial intelligence research, put forth by John McCarthy at the Dartmouth Conference in 1956, is "making a machine behave in ways that would be called intelligent if a human were so behaving.", repeating the claim put forth by Alan Turing in "Computing machinery and intelligence" (Mind, October 1950). However this definition seems to ignore the possibility of strong AI (see below). Another definition of artificial intelligence is intelligence arising from an artificial device. Most definitions could be categorized as concerning either systems that think like humans, systems that act like humans, systems that think rationally or systems that act rationally.
Strong artificial intelligence
Strong artificial intelligence research deals with the creation of some form of computer-based artificial intelligence that can truly reason and solve problems; a strong form of AI is said to be sentient, or self-aware. In theory, there are two types of strong AI:
Human-like AI, in which the computer program thinks and reasons much like a human mind.
Non-human-like AI, in which the computer program develops a totally non-human sentience, and a non-human way of thinking and reasoning(3).

3- Weak artificial intelligence:

Weak artificial intelligence research deals with the creation of some form of computer-based artificial intelligence that can reason and solve problems only in a limited domain; such a machine would, in some ways, act as if it were intelligent, but it would not possess true intelligence or sentience. The classical test for such abilities is the Turing test.
There are several fields of weak AI, one of which is natural language. Many weak AI fields have specialised software or programming languages created for them. For example, the 'most-human' natural language chatterbot A.L.I.C.E. uses a programming language AIML that is specific to its program, and the various clones, named Alicebots.
To date, much of the work in this field has been done with computer simulations of intelligence based on predefined sets of rules. Very little progress has been made in strong AI. Depending on how one defines one's goals, a moderate amount of progress has been made in weak AI.

When viewed with a moderate dose of cynicism, weak artificial intelligence can be viewed as ‘the set of computer science problems without good solutions at this point.’ Once a sub-discipline results in useful work, it is carved out of artificial intelligence and given its own name. Examples of this are pattern recognition, image processing, neural networks, natural language processing, robotics and game theory. While the roots of each of these disciplines are firmly established as having been part of artificial intelligence, they are now thought of as somewhat separate (4).

4- Philosophical criticism and support of strong AI:

The term "Strong AI" was originally coined by John Searle and was applied to digital computers and other information processing machines. Searle defined strong AI:
"according to strong AI, the computer is not merely a tool in the study of the mind; rather, the appropriately programmed computer really is a mind" (J Searle in Minds Brains and Programs. The Behavioral and Brain Sciences, vol. 3, 1980).
Searle and most others involved in this debate are addressing the problem of whether a machine that works solely through the transformation of encoded data could be a mind, not the wider issue of Monism versus Dualism (ie: whether a machine of any type, including biological machines, could contain a mind).
Searle points out in his Chinese Room Argument that information processors carry encoded data which describe other things. The encoded data itself is meaningless without a cross reference to the things it describes. This leads Searle to point out that there is no meaning or understanding in an information processor itself. As a result Searle claims to demonstrate that even a machine that passed the Turing test would not necessarily be conscious in the human sense.
Some philosophers hold that if Weak AI is accepted as possible then Strong AI must also be possible. Daniel C. Dennett argues in Consciousness Explained that if there is no magic spark or soul, then Man is just a machine, and he asks why the Man-machine should have a privileged position over all other possible machines when it comes to intelligence or 'mind'. Simon Blackburn in his introduction to philosophy, Think, points out that you might appear intelligent but there is no way of telling if that intelligence is real (ie: a 'mind'). However, if the discussion is limited to strong AI rather than artificial consciousness it may be possible to identify features of human minds that do not occur in information processing computers.
Strong AI seems to involve the following assumptions about the mind and brain:
1. the mind is software, a finite state machine so the Church-Turing thesis applies to it
2. presentism describes the mind
3. the brain is purely hardware (i.e. only follows the rules of a classical computer)
The first assumption is particularly problematic because of the old adage that any computer is just a glorified abacus. It is indeed possible to construct any type of information processor out of balls and wood; although such a device would be very slow and prone to failure, it would be able to do anything that a modern computer can do. This means that the proposition that information processors can be minds is equivalent to proposing that minds can exist as devices made of rolling balls in wooden channels.
Some (including Roger Penrose) attack the applicability of the Church-Turing thesis directly by drawing attention to the halting problem in which certain types of computation cannot be performed by information systems yet seem to be performed by human minds.
Ultimately the truth of Strong AI depends upon whether information processing machines can include all the properties of minds such as Consciousness. However, Weak AI is independent of the Strong AI problem and there can be no doubt that many of the features of modern computers such as multiplication or database searching might have been considered 'intelligent' only a century ago(5).

--------------------
5- References:

1- Stefano Franchi, Güven Güzeldere: Mechanical bodies, computational minds: artificial intelligence from automata to cyborgs, MIT Press, 2005, p 21.
- William A. Taylor: What every engineer should know about artificial intelligence, MIT Press, 1988, p 14.
For more definitions,

see also:

- Artificial intelligence - Definition:
http://www.wordiq.com/definition/Artificial_intelligence
- What does artificial intelligence mean
http://www.definitions.net/definition/artificial%20intelligence
- The ability of a computer or other machine to perform those activities that are normally thought to require intelligence.
The branch of computer science concerned with the development of machines having this ability.
- Artificial intelligence definition from Answers.com
http://www.answers.com/topic/artificial-intelligence

- Free On-line Dictionary of Computing
http://foldoc.org//artificial+intelligence
- Cybernetics and Systems
http://pespmc1.vub.ac.be/ASC/ARTIFI_INTEL.html
- Computer Telephony & Electronics Dictionary and Glossary
http://www.csgnetwork.com/glossarya.html#artificial_intelligence
- I T Glossary
http://glossary.westnetinc.com/glossary.php?filter=A

other dictionaries

- American Heritage Dictionary of the English Language
http://education.yahoo.com/reference/dictionary/entry/artificial%20intelligence
- Merriam-Webster's Online Dictionary, 11th Edition
http://www.merriam-webster.com/dictionary/artificial+intelligence

--------------------
2- Ram Gopal Prasher‏: Library and Information Science: Information science, information technology and its application, Concept Publishing Company, 1997, p 10.
--------------------
3- Jon Schiller: Human Evolution: Neanderthals and Homo sapiens, CreateSpace, 1984, p 148.
- Mark H. Bickhard, Loren Terveen: Foundational issues in artificial intelligence and cognitive science: impasse and solution, 1995, p 37.
- Robert Epstein, Gary Roberts, and Grace Beber: Parsing the Turing Test: Philosophical and Methodological Issues in the Quest for the Thinking Computer, Springer, 1st edition, 2008, p 142.
- B. R. Hergenhahn: An Introduction to the History of Psychology, Cengage Learning, 2009, p 629.
- Ela Kumar: Artificial Intelligence, I. K. International Pvt Ltd, 2009, p 13.
- Mark A. Bedau, Carol E. Cleland: The Nature of Life: Classical and Contemporary Perspectives from Philosophy and Science, Cambridge University Press, 2010, p 219.

see also:

- Strong AI - Wikipedia, the free encyclopedia
http://en.wikipedia.org/wiki/Strong_AI
- The Role of Language in Intelligence - Daniel C. Dennett
http://strong-artificial-intelligence.com/
- Strong Artificial Intelligence
http://strong-ai.info/
- Strong artificial intelligence (computer science) -- Britannica Online Encyclopedia
http://www.britannica.com/EBchecked/topic/752532/strong-artificial-intelligence
--------------------
4- James J. Sheehan‏, Morton Sosna‏: The boundaries of humanity: humans, animals, machines, 1991, p 139.
- Bill Hibbard‏: Super-intelligent machines, 2002, Springer, p 27.
- Matt Carter‏: Minds and computers: an introduction to the philosophy of artificial intelligence, Edinburgh University Press‏, 2007, p 100.
- B. R. Hergenhahn‏: An Introduction to the History of Psychology, Cengage Learning‏, 2009, p 629.
- Neeta Deshpande: Artificial Intelligence, Technical Publications, 2009, p 1-3.

see also:

- Weak Artificial Intelligence
http://www.units.muohio.edu/psybersite/cyberspace/ai/weak.shtml
- M. Gams: Weak Intelligence
http://ai.ijs.si/mezi/weakAI/weakStrongAI.htm
--------------------
5- Sören Stenlund‏: Language and philosophical problems, Routledge, 1990, p 40.
- Larry Crockett‏: The Turing test and the frame problem: AI's mistaken understanding of intelligence, Intellect Books‏, 1994, p 70.
- Vladimir Aleksandrovich Smirnov‏: Philosophical logic and logical philosophy: essays in honour of Vladimir A. Smirnov, 1996, p 38.
- Alison Adam‏: Artificial knowing: gender and the thinking machine, Routledge, 1998, p 39.

see also:

- Hubert Dreyfus - Wikipedia, the free encyclopedia:
http://en.wikipedia.org/wiki/Hubert_Dreyfus
- Minds and Machines: Journal for Artificial Intelligence, Philosophy and Cognitive Science:
http://www.springer.com/computer/ai/journal/11023
--------------------
Mohamed AbdElnaby

Posts : 40
Points : 59
Join date : 2010-10-19

http://dr--mohamed-abd-elnaby.spaces.live.com/


I want your thoughts

Post by Dr. M EL Zayat Tue Nov 02, 2010 3:38 pm

Dear all,
You have all done a wonderful job collecting all this information about AI.
I want to see your own thoughts.
As I said before: read, analyze, reason and write your own thoughts.
I want your comments on what you have learned after reading each of these topics.
Don't just copy and paste.

Thanks,
Dr. Mohamed EL Zayat

Dr. M EL Zayat
Admin

Posts : 32
Points : 40
Join date : 2010-10-19
Location : 33 Almesaha St, Aldokki - Giza - Egypt

https://melzayat.all-up.com


Re: About Artificial Intelligence

Post by Rasha Badran Thu Nov 04, 2010 9:37 am

Thank you so much for the lovely information; I really liked the video.
From my personal point of view, using artificial intelligence sometimes leads to isolating people from the real world and attaching them to a virtual one,
and that is one of the downsides of using it.
Project Natal:
its downsides are greater than its upsides, since it is a virtual world. I may be wrong, but that is what I noticed from the demonstration of its capabilities.
However, I think the robot's upsides are greater than its downsides.
Thanks again, Mr. Mostafa and Mr. Mohamed.
Rasha Badran

Posts : 100
Points : 107
Join date : 2010-10-19
Age : 42


Artificial intelligence

Post by sobhyhossen Sat Nov 20, 2010 1:53 pm

Artificial intelligence
From Wikipedia, the free encyclopedia
Artificial intelligence (AI) is the intelligence of machines and the branch of computer science that aims to create it. AI textbooks define the field as "the study and design of intelligent agents", where an intelligent agent is a system that perceives its environment and takes actions that maximize its chances of success. John McCarthy, who coined the term in 1956, defines it as "the science and engineering of making intelligent machines."
The field was founded on the claim that a central property of humans, intelligence—the sapience of Homo sapiens—can be so precisely described that it can be simulated by a machine. This raises philosophical issues about the nature of the mind and the ethics of creating artificial beings, issues which have been addressed by myth, fiction and philosophy since antiquity. Artificial intelligence has been the subject of optimism, but has also suffered setbacks and, today, has become an essential part of the technology industry, providing the heavy lifting for many of the most difficult problems in computer science. AI research is highly technical and specialized, deeply divided into subfields that often fail to communicate with each other. Subfields have grown up around particular institutions, the work of individual researchers, the solution of specific problems, longstanding differences of opinion about how AI should be done and the application of widely differing tools. The central problems of AI include such traits as reasoning, knowledge, planning, learning, communication, perception and the ability to move and manipulate objects. General intelligence (or "strong AI") is still among the field's long term goals.
History
Main articles: History of artificial intelligence and Timeline of artificial intelligence
Thinking machines and artificial beings appear in Greek myths, such as Talos of Crete, the golden robots of Hephaestus and Pygmalion's Galatea. Human likenesses believed to have intelligence were built in every major civilization: animated statues were seen in Egypt and Greece and humanoid automatons were built by Yan Shi, Hero of Alexandria, Al-Jazari and Wolfgang von Kempelen. It was also widely believed that artificial beings had been created by Jābir ibn Hayyān, Judah Loew and Paracelsus. By the 19th and 20th centuries, artificial beings had become a common feature in fiction, as in Mary Shelley's Frankenstein or Karel Čapek's R.U.R. (Rossum's Universal Robots). Pamela McCorduck argues that all of these are examples of an ancient urge, as she describes it, "to forge the gods". Stories of these creatures and their fates discuss many of the same hopes, fears and ethical concerns that are presented by artificial intelligence.
Mechanical or "formal" reasoning has been developed by philosophers and mathematicians since antiquity. The study of logic led directly to the invention of the programmable digital electronic computer, based on the work of mathematician Alan Turing and others. Turing's theory of computation suggested that a machine, by shuffling symbols as simple as "0" and "1", could simulate any conceivable act of mathematical deduction This, along with recent discoveries in neurology, information theory and cybernetics, inspired a small group of researchers to begin to seriously consider the possibility of building an electronic brain. The field of AI research was founded at a conference on the campus of Dartmouth College in the summer of 1956. The attendees, including John McCarthy, Marvin Minsky, Allen Newell and Herbert Simon, became the leaders of AI research for many decades. They and their students wrote programs that were, to most people, simply astonishing: computers were solving word problems in algebra, proving logical theorems and speaking English By the middle of the 1960s, research in the U.S. was heavily funded by the Department of Defenseand laboratories had been established around the world. AI's founders were profoundly optimistic about the future of the new field: Herbert Simon predicted that "machines will be capable, within twenty years, of doing any work a man can do" and Marvin Minsky agreed, writing that "within a generation ... the problem of creating 'artificial intelligence' will substantially be solved"
They had failed to recognize the difficulty of some of the problems they faced. In 1974, in response to the criticism of England's Sir James Lighthill and ongoing pressure from Congress to fund more productive projects, the U.S. and British governments cut off all undirected, exploratory research in AI. The next few years, when funding for projects was hard to find, would later be called an "AI winter". In the early 1980s, AI research was revived by the commercial success of expert systems, a form of AI program that simulated the knowledge and analytical skills of one or more human experts. By 1985 the market for AI had reached over a billion dollars. At the same time, Japan's fifth generation computer project inspired the U.S. and British governments to restore funding for academic research in the field. However, beginning with the collapse of the Lisp Machine market in 1987, AI once again fell into disrepute, and a second, longer lasting AI winter began.
In the 1990s and early 21st century, AI achieved its greatest successes, albeit somewhat behind the scenes. Artificial intelligence is used for logistics, data mining, medical diagnosis and many other areas throughout the technology industry. The success was due to several factors: the increasing computational power of computers (see Moore's law), a greater emphasis on solving specific subproblems, the creation of new ties between AI and other fields working on similar problems, and a new commitment by researchers to solid mathematical methods and rigorous scientific standards.
Problems
The general problem of simulating (or creating) intelligence has been broken down into a number of specific sub-problems. These consist of particular traits or capabilities that researchers would like an intelligent system to display. The traits described below have received the most attention.
Deduction, reasoning, problem solving
Early AI researchers developed algorithms that imitated the step-by-step reasoning that humans were often assumed to use when they solve puzzles, play board games or make logical deductions. By the late 1980s and '90s, AI research had also developed highly successful methods for dealing with uncertain or incomplete information, employing concepts from probability and economics. For difficult problems, most of these algorithms can require enormous computational resources — most experience a "combinatorial explosion": the amount of memory or computer time required becomes astronomical when the problem goes beyond a certain size. The search for more efficient problem solving algorithms is a high priority for AI research. Human beings solve most of their problems using fast, intuitive judgments rather than the conscious, step-by-step deduction that early AI research was able to model. AI has made some progress at imitating this kind of "sub-symbolic" problem solving: embodied agent approaches emphasize the importance of sensorimotor skills to higher reasoning; neural net research attempts to simulate the structures inside human and animal brains that give rise to this skill.
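
As a rough, back-of-the-envelope illustration of the "combinatorial explosion" mentioned above, the sketch below assumes (hypothetically) that a program can examine one million positions per second and estimates how long an exhaustive search to depth d with branching factor b would take; the numbers are illustrative estimates, not measurements of any real system.

Code:
# Exhaustive search visits roughly b**d nodes for branching factor b and depth d.
NODES_PER_SECOND = 1_000_000          # hypothetical examination rate

for b, d in [(10, 5), (10, 10), (35, 6), (35, 10)]:   # 35 is a rough chess branching factor
    nodes = b ** d
    days = nodes / NODES_PER_SECOND / 86_400
    print(f"b={b:2d}, d={d:2d}: about {nodes:.1e} nodes, ~{days:,.1f} days")

Even at a depth of only ten moves with a chess-like branching factor, the node count runs into the quadrillions, which is why pruning, heuristics and sub-symbolic shortcuts matter so much.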
Knowledge representation
Main articles: Knowledge representation and Commonsense knowledge
Knowledge representation and knowledge engineering are central to AI research. Many of the problems machines are expected to solve will require extensive knowledge about the world. Among the things that AI needs to represent are: objects, properties, categories and relations between objects; situations, events, states and time; causes and effects; knowledge about knowledge (what we know about what other people know); and many other, less well researched domains. A complete representation of "what exists" is an ontology (borrowing a word from traditional philosophy), of which the most general are called upper ontologies.
Among the most difficult problems in knowledge representation are:
Default reasoning and the qualification problem
Many of the things people know take the form of "working assumptions." For example, if a bird comes up in conversation, people typically picture an animal that is fist sized, sings, and flies. None of these things are true about all birds. John McCarthy identified this problem in 1969 as the qualification problem: for any commonsense rule that AI researchers care to represent, there tend to be a huge number of exceptions. Almost nothing is simply true or false in the way that abstract logic requires. AI research has explored a number of solutions to this problem.
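
The bird example above can be expressed as a toy default rule in a few lines of Python; this is only a sketch of the idea behind non-monotonic reasoning, not a real logic engine.

Code:
def can_fly(facts):
    """Default rule: a bird flies unless the knowledge base also says it is a penguin."""
    if "penguin" in facts:
        return False          # contrary evidence withdraws the default conclusion
    if "bird" in facts:
        return True           # conclusion inferred by default
    return None               # unknown

kb = {"bird"}
print(can_fly(kb))            # True: the default conclusion
kb.add("penguin")
print(can_fly(kb))            # False: adding a fact removed a conclusion (non-monotonic)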
The breadth of commonsense knowledge
The number of atomic facts that the average person knows is astronomical. Research projects that attempt to build a complete knowledge base of commonsense knowledge (e.g., Cyc) require enormous amounts of laborious ontological engineering — they must be built, by hand, one complicated concept at a time. A major goal is to have the computer understand enough concepts to be able to learn by reading from sources like the internet, and thus be able to add to its own ontology.
The subsymbolic form of some commonsense knowledge
Much of what people know is not represented as "facts" or "statements" that they could express verbally. For example, a chess master will avoid a particular chess position because it "feels too exposed", or an art critic can take one look at a statue and instantly realize that it is a fake. These are intuitions or tendencies that are represented in the brain non-consciously and sub-symbolically. Knowledge like this informs, supports and provides a context for symbolic, conscious knowledge. As with the related problem of sub-symbolic reasoning, it is hoped that situated AI or computational intelligence will provide ways to represent this kind of knowledge.
Planning
Main article: Automated planning and scheduling
Intelligent agents must be able to set goals and achieve them. They need a way to visualize the future (they must have a representation of the state of the world and be able to make predictions about how their actions will change it) and be able to make choices that maximize the utility (or "value") of the available choices.
In classical planning problems, the agent can assume that it is the only thing acting on the world and it can be certain what the consequences of its actions may be. However, if this is not true, it must periodically check if the world matches its predictions and it must change its plan as this becomes necessary, requiring the agent to reason under uncertainty.
Multi-agent planning uses the cooperation and competition of many agents to achieve a given goal. Emergent behavior such as this is used by evolutionary algorithms and swarm intelligence.
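
To make the idea of classical planning concrete, here is a minimal Python sketch: states are sets of facts, actions have preconditions, add-lists and delete-lists, and a breadth-first search finds a sequence of actions that reaches the goal. The key-and-door domain is invented purely for illustration.

Code:
from collections import deque

# Each action: (preconditions, facts added, facts deleted). A hypothetical domain.
ACTIONS = {
    "pick-up-key": ({"key-on-table"}, {"has-key"}, {"key-on-table"}),
    "unlock-door": ({"has-key", "door-locked"}, {"door-unlocked"}, {"door-locked"}),
    "open-door":   ({"door-unlocked"}, {"door-open"}, set()),
}

def plan(initial, goal):
    """Breadth-first search over states; returns a shortest sequence of actions."""
    start = frozenset(initial)
    frontier = deque([(start, [])])
    seen = {start}
    while frontier:
        state, steps = frontier.popleft()
        if goal <= state:
            return steps
        for name, (pre, add, delete) in ACTIONS.items():
            if pre <= state:
                new_state = frozenset((state - delete) | add)
                if new_state not in seen:
                    seen.add(new_state)
                    frontier.append((new_state, steps + [name]))
    return None

print(plan({"key-on-table", "door-locked"}, {"door-open"}))
# -> ['pick-up-key', 'unlock-door', 'open-door']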
Learning
Main article: Machine learning
Machine learning has been central to AI research from the beginning. Unsupervised learning is the ability to find patterns in a stream of input. Supervised learning includes both classification and numerical regression. Classification is used to determine what category something belongs in, after seeing a number of examples of things from several categories. Regression takes a set of numerical input/output examples and attempts to discover a continuous function that would generate the outputs from the inputs. In reinforcement learning the agent is rewarded for good responses and punished for bad ones. These can be analyzed in terms of decision theory, using concepts like utility. The mathematical analysis of machine learning algorithms and their performance is a branch of theoretical computer science known as computational learning theory.
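
A tiny supervised-learning example may help here: the 1-nearest-neighbour classifier below labels a new sample with the class of the closest training example. The small fruit data set (weight in grams, diameter in cm) is invented purely for illustration.

Code:
# A minimal supervised-learning sketch: 1-nearest-neighbour classification.
TRAINING = [
    ((150, 7.0), "apple"),
    ((170, 7.5), "apple"),
    ((120, 6.0), "orange"),
    ((110, 5.8), "orange"),
]

def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def classify(sample):
    """Return the label of the nearest training example."""
    nearest = min(TRAINING, key=lambda item: distance(item[0], sample))
    return nearest[1]

print(classify((160, 7.2)))   # -> 'apple'
print(classify((115, 5.9)))   # -> 'orange'

Reinforcement learning and unsupervised learning replace the labelled examples with rewards or with raw, unlabelled data, but the underlying goal is the same: generalizing from experience.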
Natural language processing

[Image: ASIMO uses sensors and intelligent algorithms to avoid obstacles and navigate stairs.]
Main article: Natural language processing
Natural language processing gives machines the ability to read and understand the languages that humans speak. Many researchers hope that a sufficiently powerful natural language processing system would be able to acquire knowledge on its own, by reading the existing text available over the internet. Some straightforward applications of natural language processing include information retrieval (or text mining) and machine translation.
Motion and manipulation
Main article: Robotics
The field of robotics is closely related to AI. Intelligence is required for robots to be able to handle such tasks as object manipulation and navigation, with sub-problems of localization (knowing where you are), mapping (learning what is around you) and motion planning (figuring out how to get there).
Perception
Main articles: Machine perception, Computer vision, and Speech recognition
Machine perception is the ability to use input from sensors (such as cameras, microphones, sonar and others more exotic) to deduce aspects of the world. Computer vision is the ability to analyze visual input. A few selected subproblems are speech recognition, facial recognition and object recognition.
Social intelligence
Main article: Affective computing

[Image: Kismet, a robot with rudimentary social skills.]
Emotion and social skills play two roles for an intelligent agent. First, it must be able to predict the actions of others, by understanding their motives and emotional states. (This involves elements of game theory, decision theory, as well as the ability to model human emotions and the perceptual skills to detect emotions.) Also, for good human-computer interaction, an intelligent machine also needs to display emotions. At the very least it must appear polite and sensitive to the humans it interacts with. At best, it should have normal emotions itself.
Creativity
Main article: Computational creativity

[Image: TOPIO, a robot that can play table tennis, developed by TOSY.]
A sub-field of AI addresses creativity both theoretically (from a philosophical and psychological perspective) and practically (via specific implementations of systems that generate outputs that can be considered creative, or systems that identify and assess creativity). A related area of computational research is Artificial Intuition and Artificial Imagination.
General intelligence
Main articles: Strong AI and AI-complete
Most researchers hope that their work will eventually be incorporated into a machine with general intelligence (known as strong AI), combining all the skills above and exceeding human abilities at most or all of them. A few believe that anthropomorphic features like artificial consciousness or an artificial brain may be required for such a project.
Many of the problems above are considered AI-complete: to solve one problem, you must solve them all. For example, even a straightforward, specific task like machine translation requires that the machine follow the author's argument (reason), know what is being talked about (knowledge), and faithfully reproduce the author's intention (social intelligence). Machine translation, therefore, is believed to be AI-complete: it may require strong AI to be done as well as humans can do it.
Approaches
There is no established unifying theory or paradigm that guides AI research. Researchers disagree about many issues. A few of the most long-standing questions that have remained unanswered are these: should artificial intelligence simulate natural intelligence, by studying psychology or neurology? Or is human biology as irrelevant to AI research as bird biology is to aeronautical engineering? Can intelligent behavior be described using simple, elegant principles (such as logic or optimization)? Or does it necessarily require solving a large number of completely unrelated problems? Can intelligence be reproduced using high-level symbols, similar to words and ideas? Or does it require "sub-symbolic" processing?
Cybernetics and brain simulation
Main articles: Cybernetics and Computational neuroscience
There is no consensus on how closely the brain should be simulated.
In the 1940s and 1950s, a number of researchers explored the connection between neurology, information theory, and cybernetics. Some of them built machines that used electronic networks to exhibit rudimentary intelligence, such as W. Grey Walter's turtles and the Johns Hopkins Beast. Many of these researchers gathered for meetings of the Teleological Society at Princeton University and the Ratio Club in England. By 1960, this approach was largely abandoned, although elements of it would be revived in the 1980s.
Symbolic
Main article: GOFAI
When access to digital computers became possible in the middle 1950s, AI research began to explore the possibility that human intelligence could be reduced to symbol manipulation. The research was centered in three institutions: CMU, Stanford and MIT, and each one developed its own style of research. John Haugeland named these approaches to AI "good old fashioned AI" or "GOFAI".
Cognitive simulation
Economist Herbert Simon and Allen Newell studied human problem solving skills and attempted to formalize them, and their work laid the foundations of the field of artificial intelligence, as well as cognitive science, operations research and management science. Their research team used the results of psychological experiments to develop programs that simulated the techniques that people used to solve problems. This tradition, centered at Carnegie Mellon University would eventually culminate in the development of the Soar architecture in the middle 80s.
Logic based
Unlike Newell and Simon, John McCarthy felt that machines did not need to simulate human thought, but should instead try to find the essence of abstract reasoning and problem solving, regardless of whether people used the same algorithms. His laboratory at Stanford (SAIL) focused on using formal logic to solve a wide variety of problems, including knowledge representation, planning and learning. Logic was also the focus of the work at the University of Edinburgh and elsewhere in Europe which led to the development of the programming language Prolog and the science of logic programming.
"Anti-logic" or "scruffy"
Researchers at MIT (such as Marvin Minsky and Seymour Papert) found that solving difficult problems in vision and natural language processing required ad-hoc solutions – they argued that there was no simple and general principle (like logic) that would capture all the aspects of intelligent behavior. Roger Schank described their "anti-logic" approaches as "scruffy" (as opposed to the "neat" paradigms at CMU and Stanford). Commonsense knowledge bases (such as Doug Lenat's Cyc) are an example of "scruffy" AI, since they must be built by hand, one complicated concept at a time.
Knowledge based
When computers with large memories became available around 1970, researchers from all three traditions began to build knowledge into AI applications. This "knowledge revolution" led to the development and deployment of expert systems (introduced by Edward Feigenbaum), the first truly successful form of AI software. The knowledge revolution was also driven by the realization that enormous amounts of knowledge would be required by many simple AI applications.
Sub-symbolic
During the 1960s, symbolic approaches had achieved great success at simulating high-level thinking in small demonstration programs. Approaches based on cybernetics or neural networks were abandoned or pushed into the background. By the 1980s, however, progress in symbolic AI seemed to stall and many believed that symbolic systems would never be able to imitate all the processes of human cognition, especially perception, robotics, learning and pattern recognition. A number of researchers began to look into "sub-symbolic" approaches to specific AI problems.
Bottom-up, embodied, situated, behavior-based or nouvelle AI
Researchers from the related field of robotics, such as Rodney Brooks, rejected symbolic AI and focused on the basic engineering problems that would allow robots to move and survive. Their work revived the non-symbolic viewpoint of the early cybernetics researchers of the 50s and reintroduced the use of control theory in AI. This coincided with the development of the embodied mind thesis in the related field of cognitive science: the idea that aspects of the body (such as movement, perception and visualization) are required for higher intelligence.
Computational Intelligence
Interest in neural networks and "connectionism" was revived by David Rumelhart and others in the middle 1980s. These and other sub-symbolic approaches, such as fuzzy systems and evolutionary computation, are now studied collectively by the emerging discipline of computational intelligence.
Statistical
In the 1990s, AI researchers developed sophisticated mathematical tools to solve specific subproblems. These tools are truly scientific, in the sense that their results are both measurable and verifiable, and they have been responsible for many of AI's recent successes. The shared mathematical language has also permitted a high level of collaboration with more established fields (like mathematics, economics or operations research). Stuart Russell and Peter Norvig describe this movement as nothing less than a "revolution" and "the victory of the neats."

sobhyhossen

Posts : 21
Points : 32
Join date : 2010-11-03
Age : 54


From my point of view

Post by sobhyhossen Sat Nov 20, 2010 2:35 pm

Artificial intelligence, from my point of view, is the integration of machines or programs with humans, so that in the end the programs that humans design simulate the human mind up to the level of human intelligence. Programs could even be built to a level above humans, so that the simulation reaches the point where everyone can interact with them. But who will reach that level of programming?
Problems:
I believe there are some problems with these programs: surprises can occur in the design of a particular simulation program between the human and the machine, and from this come differences and some failures in these programs. This is also where some of the differences between the programs humans design and natural human intelligence arise, because some people may have intelligence greater than what is put into the programs built for artificial intelligence. And let us not forget that the power of God cannot be surpassed by the power of humans.


sobhyhossen

Posts : 21
Points : 32
Join date : 2010-11-03
Age : 54


A good analytical view

Post by Dr. M EL Zayat Sun Nov 21, 2010 12:12 pm

Thank you, Mr. Sobhy, for this good and concise analysis.
Many thanks.
Dr. Mohamed EL Zayat

Dr. M EL Zayat
Admin

Posts : 32
Points : 40
Join date : 2010-10-19
Location : 33 Almesaha St, Aldokki - Giza - Egypt

https://melzayat.all-up.com


Some challenges of AI

Post by ghadahilal Sun Dec 05, 2010 12:18 pm

1. Biological systems can adapt to their environments, but AI cannot.
It is true that in some cases a person may die, for example from extreme cold, but most of the time adaptation happens.
2. It is hard to take things from one success and apply them to new problems;
it is not rare that each problem needs an individual solution.
This hinders progress, because everyone has to start each problem from the beginning.
3. Programs should be more robust,
because any mistake, even a simple one, can break the whole program.
4. Modern chess programs rely completely on deep search trees and do not play chess at all like humans.
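
Point 4 above is easy to see in code: a chess program picks a move by searching a game tree and backing up position evaluations, not by human-style intuition. Below is a toy minimax sketch in Python over a hand-built two-ply tree; the leaf values are invented position scores.

Code:
# Toy game tree: the maximizer moves at the root, the minimizer replies.
TREE = {"root": ["a", "b"], "a": ["a1", "a2"], "b": ["b1", "b2"]}
LEAF_VALUES = {"a1": 3, "a2": 5, "b1": -2, "b2": 9}   # hypothetical evaluations

def minimax(node, maximizing=True):
    """Best achievable value assuming both sides play optimally."""
    if node in LEAF_VALUES:
        return LEAF_VALUES[node]
    values = [minimax(child, not maximizing) for child in TREE[node]]
    return max(values) if maximizing else min(values)

print(minimax("root"))   # -> 3: the maximizer chooses move "a"

Real chess programs add alpha-beta pruning, tuned evaluation functions and search many plies deep, which is exactly why their play looks nothing like human pattern-based reasoning.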

ghadahilal

Posts : 3
Points : 4
Join date : 2010-10-23


It started in ancient Egypt

Post by aabdelkader Tue Dec 07, 2010 11:19 am

Human likenesses believed to have intelligence were built in every major civilization: animated statues were seen in Egypt and Greece and humanoid automatons were built by Yan Shi, Hero of Alexandria, Al-Jazari and Wolfgang von Kempelen.

aabdelkader

Posts : 12
Points : 14
Join date : 2010-11-09

