
Commit cb8c05b

updated machine learning
1 parent 6f8956a commit cb8c05b


content/latest/machine_learning.md

Lines changed: 31 additions & 15 deletions
@@ -12,26 +12,28 @@ script = 'animation'
 <div class="example">
 <dl>
 <dt>Is true AI possible?</dt>
-<dd>It is very hard to define a human mind with a such mathematical rigor as it is possible to define a Turing machine. We still do not have a working model of a mouse brain however we have the hardware capable of simulating it. A mouse has around 4 million neurons in the cerebral cortex. A human being has 80-120 billion neurons (19-23 billion neocortical). Thus, you can imagine how much more research will need to be conducted in order to get a working model of a human mind.
+<dd>It is very hard to define a human mind with the mathematical rigor of a Turing machine (an abstract model of computation). Although we still do not have a working model of a mouse brain, we do have hardware capable of simulating it. A mouse has around 4 million neurons in the cerebral cortex. A human being has 80-120 billion neurons. Thus, you can imagine how much more research will need to be conducted in order to get a working model of a human mind.
 
-You could argue that we only need to do top-down approach and do not need to understand individual workings of every neuron. In that case you might study some non-monotonic logic, abductive reasoning, decision theory, etc. When the new theories come, more exceptions and paradoxes occur. And it seems we are nowhere close to a working model of a human mind.
+Is there any set of rules that can define the entire scope of human expression? You could argue that we only need a top-down approach and do not need to understand the individual workings of every neuron. In that case you might study non-monotonic logic, abductive reasoning, decision theory, and so on. Yet as new theories arrive, more exceptions and paradoxes appear. Alternatively, how could any simple logic encompass the evolving meaning that humans attribute to words and abstract ideas?
 
-After taking propositional and then predicate calculus I asked my logic professor:
-"Is there any logic that can define the whole set of human language?"
-He said:
-"How would you define the following?
-To see a World in a grain of sand
-And a Heaven in a wild flower,
-Hold Infinity in the palm of your hand
-And Eternity in an hour.
-If you can do it, you will become famous."
+Although in its early stages, machine learning attempts to solve this dilemma. By enriching a computer with the ability to improve its output through positive and negative feedback loops, it takes another step closer towards true artificial intelligence. But for now these models amount to Pavlovian conditioning on a single, narrowly focused skill set (in this case natural language). <a href="https://en.wikipedia.org/wiki/G_factor_(psychometrics)">General intelligence</a> as displayed by humans is a much broader test: it summarizes positive correlations among different cognitive tasks, reflecting the fact that an individual's performance on one type of cognitive task tends to be comparable to that person's performance on other kinds of cognitive tasks.
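As a minimal sketch of the positive/negative feedback idea mentioned above, consider a toy perceptron that nudges its weights only when it is wrong (the data, learning rate, and function names here are illustrative assumptions, not from the original):

```python
# A perceptron learns from feedback: each wrong prediction produces an
# error signal that pushes the weights toward the correct answer.

def predict(weights, bias, x):
    # Positive weighted sum -> class 1, otherwise class 0.
    s = sum(w * xi for w, xi in zip(weights, x)) + bias
    return 1 if s > 0 else 0

def train(samples, labels, lr=0.1, epochs=20):
    weights = [0.0] * len(samples[0])
    bias = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            error = y - predict(weights, bias, x)  # feedback: -1, 0, or +1
            weights = [w + lr * error * xi for w, xi in zip(weights, x)]
            bias += lr * error
    return weights, bias

# Learn logical AND from four labelled examples.
w, b = train([[0, 0], [0, 1], [1, 0], [1, 1]], [0, 0, 0, 1])
print([predict(w, b, x) for x in [[0, 0], [0, 1], [1, 0], [1, 1]]])  # [0, 0, 0, 1]
```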
+
+Douglas Hofstadter, in his books Gödel, Escher, Bach and I Am a Strange Loop, cites <a href="http://en.wikipedia.org/wiki/G%C3%B6del%27s_incompleteness_theorems#Minds_and_machines">Gödel's theorems</a> as an example of what he calls a strange loop: a hierarchical, self-referential structure existing within an axiomatic formal system. He argues that this is the same kind of structure that gives rise to consciousness, the sense of "I", in the human mind. While the self-reference in Gödel's theorem comes from the Gödel sentence asserting its own unprovability (i.e., "This sentence is not provable."), the self-reference in the human mind comes from the way in which the brain abstracts and categorizes stimuli into "symbols", or groups of neurons that respond to concepts, in what is effectively also a formal system, eventually giving rise to symbols modeling the concept of the very entity doing the perceiving. Hofstadter argues that a strange loop in a sufficiently complex formal system can give rise to a "downward" or "upside-down" causality, a situation in which the normal hierarchy of cause and effect is flipped upside down. In the case of Gödel's theorem, this manifests, in short, as the following:
+
+"Merely from knowing the formula's meaning, one can infer its truth or falsity without any effort to derive it in the old-fashioned way, which requires one to trudge methodically 'upwards' from the axioms. This is not just peculiar; it is astonishing. Normally, one cannot merely look at what a mathematical conjecture says and simply appeal to the content of that statement on its own to deduce whether the statement is true or false."
+
+For example, asked for the exact value of Pi, a human can simply answer with the symbol π, whereas a computer enumerating its digits would never halt on its own and would stop only after running out of memory. According to Gödel's incompleteness theorems, a computer cannot escape the inherent limitations of a formal axiomatic ruleset. In the case of the mind, a far more complex formal system, this "downward causality" manifests, in Hofstadter's view, as the ineffable human instinct that the causality of our minds lies at the high level of desires, concepts, personalities, thoughts and ideas, rather than at the low level of interactions between neurons or even fundamental particles, even though according to physics the latter seems to possess the causal power.
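To illustrate that non-halting contrast with a sketch (the Leibniz series is chosen here only for brevity; any digit-enumeration scheme behaves the same way, and the step cap is a hypothetical stand-in for finite memory):

```python
# A process that never halts on its own: each iteration refines an
# approximation of pi via the Leibniz series (pi/4 = 1 - 1/3 + 1/5 - ...).
# Only an external limit ever stops it; pi itself is never reached.
def approximate_pi(max_steps):
    total, sign = 0.0, 1.0
    for k in range(max_steps):
        total += sign / (2 * k + 1)
        sign = -sign
    return 4 * total

print(approximate_pi(1_000_000))  # ~3.1415916, still only an approximation
```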
+
+"There is thus a curious upside-downness to our normal human way of perceiving the world: we are built to perceive 'big stuff' rather than 'small stuff', even though the domain of the tiny seems to be where the actual motors driving reality reside."
+
+Thus, cognition is a function of how one's own brain categorizes stimuli into a formal system. The presence of free will becomes apparent when these higher-level abstractions disagree with the underlying stimuli that gave rise to them.
+
+Looked at this way, Gödel's proof suggests (though by no means proves!) that there could be some high-level way of viewing the mind/brain, involving concepts that do not appear on lower levels, and that this level might have explanatory power that does not exist, even in principle, on lower levels. It would mean that some facts could be explained quite easily on the high level, but not on lower levels at all. No matter how long and cumbersome a low-level statement were made, it would not explain the phenomena in question.
+
+What might such high-level concepts be? It has been proposed for eons, by various holistically or "soulistically" inclined scientists and humanists, that consciousness is a phenomenon that escapes explanation in terms of brain components; so here is a candidate at least. There is also the ever-puzzling notion of free will. So perhaps these qualities could be "emergent" in the sense of requiring explanations that cannot be furnished by physiology alone.
 
-There have been debates that a human mind might be equivalent to a Turing machine. However, a more interesting result would be for a human mind not to be Turing-equivalent, that it would give a rise to a definition of an algorithm that is not possibly computable by a Turing machine. Then the Church's thesis would not hold and there could possibly be a general algorithm that could solve a halting problem.
 
-Read more on <ins>Godel's Incompleteness theorem</ins>:
 
-* [Minds and machines](http://en.wikipedia.org/wiki/G%C3%B6del%27s_incompleteness_theorems#Minds_and_machines)
-* [Godelian arguments](http://en.wikipedia.org/wiki/Mechanism_(philosophy)#G.C3.B6delian_arguments)
 </dd>
 <dt>Machine learning vs Statistics</dt>
 <dd>The basic premise of machine learning (ML) is to build algorithms that can receive input data and use statistical analysis to predict an output, updating outputs as new data becomes available. Statistics focuses on quantifying uncertainty by formalizing the relationship between variables through mathematical equations. ML focuses on prediction and classification by using algorithms that learn from data instead of explicitly programmed instructions.</dd><br/>
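A rough sketch of that contrast on a toy one-variable linear model (the data and names are illustrative assumptions): statistics fits a closed-form equation, while the ML-style version learns the same line by iteratively updating parameters from data.

```python
# Fit y ~ m*x + c two ways on the same toy data.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 3.9, 6.2, 7.8]
n = len(xs)

# Statistics: formalize the relationship with a closed-form equation
# (ordinary least squares).
mean_x, mean_y = sum(xs) / n, sum(ys) / n
m = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
    sum((x - mean_x) ** 2 for x in xs)
c = mean_y - m * mean_x

# Machine learning: learn the relationship from data, updating the
# parameters a little after every example seen.
m_hat, c_hat, lr = 0.0, 0.0, 0.01
for _ in range(5000):
    for x, y in zip(xs, ys):
        err = (m_hat * x + c_hat) - y
        m_hat -= lr * err * x
        c_hat -= lr * err

print((m, c), (m_hat, c_hat))  # both arrive at roughly the same line
```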
@@ -106,6 +108,20 @@ Read more on <ins>Godel's Incompleteness theorem</ins>:
 Gradient Descent
 </figcaption>
 </div>
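For reference alongside the figure, a minimal sketch of gradient descent on a one-dimensional function (the function and step size are illustrative assumptions, not from the original):

```python
# Gradient descent on f(x) = (x - 3)^2, whose minimum is at x = 3.
# The gradient f'(x) = 2 * (x - 3) points uphill, so we step the other way.
def gradient(x):
    return 2 * (x - 3)

x, learning_rate = 0.0, 0.1
for step in range(50):
    x -= learning_rate * gradient(x)

print(x)  # ~3.0
```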
+<dt>Neural network analogy</dt>
+<dd><b>Input layer</b> = Eyes</dd>
+<dd><b>Hidden layer</b> = Brain</dd>
+<div style="margin-left:20px">
+<dd>Activation function = Neurons passing signals to other neurons in the brain</dd>
+</div>
+<dd><b>Output layer</b> = Consciousness (how the brain perceives the stimuli from the eyes)</dd>
+<div style="margin-left:20px">
+<dd>Optimization function = Learning by updating the weights/biases based on the accuracy of previous outputs</dd>
+</div>
+
+</dd>
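To make the analogy concrete, a minimal sketch of one forward pass and one feedback-driven weight update through the layers named above (the sizes, values, and target are hypothetical):

```python
import math

# "Eyes" -> "brain" -> "consciousness": one forward pass through a tiny
# network, then one optimization step based on the output's accuracy.
def sigmoid(s):          # activation: a neuron passing on a squashed signal
    return 1 / (1 + math.exp(-s))

x = [0.5, 0.9]           # input layer: the stimulus the "eyes" see
w_hidden = [0.4, -0.2]   # hidden-layer weights (the "brain")
w_out = 0.7              # output-layer weight
target = 1.0             # what the output "should" have been

h = sigmoid(sum(w * xi for w, xi in zip(w_hidden, x)))  # hidden activation
y = sigmoid(w_out * h)                                  # the "perception"

# Optimization: nudge the output weight downhill on the squared error,
# i.e., learn from the feedback signal (y - target).
lr = 0.5
grad = (y - target) * y * (1 - y) * h   # d(error)/d(w_out) for the sigmoid
w_out -= lr * grad
print(y, w_out)
```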
 </dl>
 <dl>
 <dt>CNN</dt>
