Róró
Re: Róró
(One weakness of mine and of phpBB's: I write up a post and forget to submit it. So, once more.)
@Jigsaw_You@mastodon.nl boosted: @dingemansemark@scholar.social posted: https://scholar.social/@dingemansemark/ ... 6377573295 : https://davekarpf.substack.com/p/a-memo ... re-not-the .
@Jigsaw_You@mastodon.nl boosted: @alex@dair-community.social posted: https://dair-community.social/@alex/112225155248997052 plus @Jigsaw_You@mastodon.nl posted: https://mastodon.nl/@Jigsaw_You/112224691784730144 and both lead here: paywalled: https://www.ft.com/content/648228e7-11e ... 5a638e6135 unpaywalled: https://archive.is/jQ3PX .
Re: Róró
Following up on https://trojkatretiho.cz/viewtopic.php?p=492#p492 .
What brought me back to it was this two-toot conversation:
https://scholar.social/@davidschlangen/ ... 6210731511
https://scholar.social/@dingemansemark/ ... 9599176798
Which in turn links to:
https://osf.io/preprints/psyarxiv/4cbuv
https://osf.io/4cbuv/download/
ReclAIming_AI_2023.pdf
Abstract:
The idea that human cognition is, or can be understood as, a form of computation is a useful conceptual tool for cognitive science. It was a foundational assumption during the birth of cognitive science as a multidisciplinary field, with Artificial Intelligence (AI) as one of its contributing fields. One conception of AI in this context is as a provider of computational tools (frameworks, concepts, formalisms, models, proofs, simulations, etc.) that support theory building in cognitive science. The contemporary field of AI, however, has taken the theoretical possibility of explaining human cognition as a form of computation to imply the practical feasibility of realising human(-like or -level) cognition in factual computational systems; and, the field frames this realisation as a short-term inevitability. Yet, as we formally prove herein, creating systems with human(-like or -level) cognition is intrinsically computationally intractable. This means that any factual AI systems created in the short-run are at best decoys. When we think these systems capture something deep about ourselves and our thinking, we induce distorted and impoverished images of ourselves and our cognition. In other words, AI in current practice is deteriorating our theoretical understanding of cognition rather than advancing and enhancing it. The situation could be remediated by releasing the grip of the currently dominant view on AI and by returning to the idea of AI as a theoretical tool for cognitive science. In reclaiming this older idea of AI, however, it is important not to repeat conceptual mistakes of the past (and present) that brought us to where we are today.
Excerpts I'm noting down:
We then present a formal proof that the problem that Dr. Ingenia sets out to solve is intractable (formally, NP-hard; i.e., possible in principle but provably infeasible; see the section Ingenia Theorem). We also unpack how and why our proof shows that the AI-as-engineering approach is a theoretical dead-end for cognitive science.
Box 1 — Implications of intractability
Because AI-by-Learning is intractable (formally, NP-hard under randomized reductions), the sample-and-time requirements grow non-polynomially (e.g. exponentially or worse) in n. To illustrate just how quickly this would exhaust all the resources available in the universe, even for moderate input size n, let us do a simple thought experiment: Imagine we are looking for an AI that can respond appropriately to different situations corresponding to conversations of, say, 15 minutes. Since people speak around 160 words per minute on average (Yuan, Liberman, & Cieri, 2006, see also Dingemanse & Liesenfeld, 2022; Liesenfeld & Dingemanse, 2022), let us take 60 words per minute as a generous lower bound. Then a conversation would have on average 900 words. For humans, the appropriate response may depend on the full context of the conversation, and we have no problem conditioning our behaviour in this way. To encode such sequences of spoken words in some binary encoding, we would need more bits than words; i.e. n > 900. The assumption of using 1 bit per word is an underestimation, assuming that at each point, the conversation can continue grammatically correctly in at least two directions (cf. Parberry, 1997). Now the AI needs to learn to respond appropriately to conversations of this size (and not just to short prompts). Since resource requirements for AI-by-Learning grow exponentially or worse, let us take a simple exponential function O(2^n) as our proxy of the order of magnitude of resources needed as a function of n. 2^900 ∼ 10^270 is already unimaginably larger than the number of atoms in the universe (∼ 10^81). Imagine us sampling this super-astronomical space of possible situations using so-called ‘Big Data’. Even if we grant that billions of trillions (10^21) of relevant data samples could be generated (or scraped) and stored, then this is still but a minuscule proportion of the order of magnitude of samples needed to solve the learning problem for even moderate size n. It is thus no surprise that AI companies that are trying to construct AIs using machine learning are running out of useable data (Shumailov et al., 2023; P. Villalobos et al., 2022) and that actual datasets are not being scaled up to more and more complex and diverse real-world situations and behaviours, but they are becoming more homogeneous (with even harmful consequences; Birhane, Prabhu, Han, & Boddeti, 2023). That nevertheless ‘large data sets’ (incorrectly) appeared to be sufficient for solving a problem like AI-by-Learning, can be explained by the fact that people generally have poor intuitions about large numbers (Landy, Silbert, & Goldin, 2013) and underestimate how fast exponential functions grow (van Rooij, 2018; Wagenaar & Sagaria, 1975; Wagenaar & Timmers, 1978, 1979). Hence, contrary to intuition, one cannot extrapolate from the perceived current rate of progress to the conclusion that AGI is soon to be attained.
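The box's arithmetic is easy to sanity-check. A minimal sketch in Python (the 15 minutes, the 60 words per minute, the O(2^n) proxy and the 10^81 / 10^21 reference magnitudes are all the paper's own assumptions, just replayed here):

```python
import math

# Box 1 parameters: a 15-minute conversation at a deliberately low
# 60 words per minute gives 900 words, hence n >= 900 bits.
n = 15 * 60  # 900

# The box's O(2^n) proxy, expressed as a power of ten: log10(2^n) = n * log10(2).
log10_resources = n * math.log10(2)
print(math.floor(log10_resources))  # 270, i.e. 2^900 ~ 10^270

# Reference magnitudes from the box.
ATOMS_LOG10 = 81    # ~10^81 atoms in the observable universe
SAMPLES_LOG10 = 21  # "billions of trillions" (10^21) of data samples

# Even a 10^21-sample corpus covers only a ~10^-250 fraction of the space.
print(math.floor(log10_resources - SAMPLES_LOG10))  # 249
```

So the 2^900 ∼ 10^270 figure checks out, and the gap to any conceivable dataset really is on the order of 250 orders of magnitude.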
I still haven't chewed my way through the whole article, but I'm starting to get a sense of what they're actually saying. It seems obvious that we can build (general artificial?) intelligence (AGI?) with a far smaller number of atoms than there are in the universe. I mean: conception, birth, a few decades of learning, and presto: here is Iris van Rooij, another of the article's authors, or me. So even if the complexity of making a human grew exponentially in whatever way, we can somehow manage to make humans with some finite amount of atoms. But if I've understood correctly, this is not what the authors are trying to refute. I think they target one(?) particular(?) mode of production / learning, namely today's LLMs. And that with some calculations they show that for a certain task my intelligence supposedly handles, the current LLM style of learning would need far too many atoms.
The task is, I gather, to respond intelligently to a fifteen-minute conversation. It strikes me that this is a task only a minimum of people manage. It seems to me that most people respond to the last thirty seconds of a conversation. To the last toot. And then everything is already clear to them.
Otherwise, the table headed:
A non-comprehensive list of different (not mutually exclusive) meanings of the word AI, including AI as idea, AI as a type of system, AI as a field of study, and AI as institution(al unit).
which appears both in the article and in Dingemanse's toot, is nice.
Re: Róró
https://mastodonczech.cz/@salom/112332881630594858 :
> Why is the greatest danger to our civilisation not AI, but a lonely young man?
>
> https://medium.seznam.cz/clanek/salom-n ... -muz-59342 :
>> https://aeon.co/essays/the-self-is-not- ... identities
Re: Róró
https://mas.to/@osma https://trojkatretiho.cz/viewtopic.php?p=661#p661 :
> I would rather be understood than not.
By whom am I supposed to be understood? By some particular "you", or by the róró?
Re: Róró
Somehow it seems to me that three things need to be distinguished: nature, the human, the róró (link to a picture in the next post).
And it is not good to call the róró by human words, and it is definitely not good to characterise it as a "human work", a "human creation", a "society" and the like. One has to realise that the róró is non-human, even though its non-humanness need not at all be understood in that pejorative(?) way in which the word is used to condemn often very yuck-"human" behaviour. Try to wrench free of anthropocentrism.
Re: Róró
As much as I can't stand feeding ('feeds'), I get a kick out of browsing ('browse').
I used "my" union search and got:
https://duckduckgo.com/?q=intersubjecti ... ua)&ia=web
And the very first hit is a nice presentation:
https://hci.iwr.uni-heidelberg.de/syste ... eality.pdf
I kept digging:
https://www.uni-heidelberg.de/en?overla ... gsc.page=1
And found this:
https://hci.iwr.uni-heidelberg.de/teach ... stics_2019
And for dessert:
https://en.wikipedia.org/wiki/Intersubjectivity
https://cs.wikipedia.org/wiki/Intersubjektivita