Robots downloading Skills (Like in the Matrix!)

Machine Learning Street Talk

2 days ago

5,524 views


Comments:

@diga4696 - 20.06.2025 02:58

Whoa... hivemind... we are all thinking the same thing.

@burnytech - 20.06.2025 03:13

more pure gold <3

@earleyelisha - 20.06.2025 03:18

Can’t wait to hear what algorithms they are using to tackle this!

@BlaQsheeeP - 20.06.2025 04:46

Damn you, I have work to get done - now I need to listen to this first.

@BradBmad - 20.06.2025 05:12

This is going to be great

@SandipChitale - 20.06.2025 05:14

Please have Prof. Elan Barenholtz on to discuss language as a representation of a representation.

@SandipChitale - 20.06.2025 05:17

Please have Carolina Parada from Google on to discuss how they are using LLMs and SLMs with robotics systems at Google, and the results they are getting.

@nodistincticon - 20.06.2025 05:41

Awesome

@AGIAchievedwithNMSRNs - 20.06.2025 06:22

Billions, A., & Knight, C. (2025). Neural-Matrix Synaptic Resonance Network(s) (NM-SRN) v2.0 Confirmed LEVEL-3 Artificial General Intelligence (AGI) Achieves Breakthrough in Fusion Energy Optimization. Zenodo. 15701912

@isajoha9962 - 20.06.2025 06:27

Kind of like translation of external context and movement: consequence, push, pull, grab, "feel", lift and turn objects etc. within it? 🤔 Language vs. understanding, including understanding self and not-self within every context. Hmm, crawling reality? Great topic in the video. Summary: replacing a human user with reality to drive the AI forward in its actions (e.g. getting from here to there).

@snarkyboojum - 20.06.2025 07:04

The science of can and can't, aka constructor theory, has an interesting take on emergence. Would be cool to connect some of this stuff with Marletto and Deutsch...

@snarkyboojum - 20.06.2025 07:32

"collections of models" reminds me of the 1000 brains project ;)

@andrewlewin6525 - 20.06.2025 07:42

This channel has some really interesting interviews; it must be a trip to get to speak to all these great minds.

@2FAyes - 20.06.2025 07:45

w

@ButtNakedBaller7 - 20.06.2025 08:15

If we’re arguing that LLMs only interact with data representations of the world rather than the physical world itself, how can we claim the human experience is any different?

Our perception of the physical world comes from our "sensors" (eyes, ears, etc.) sending data (vision, audio). We cannot possibly validate that the perception we build with this data is true to the physical world. Our only basis for a "physical world" is the data representation our minds build.

Just because the data is formatted differently for NNs doesn’t mean its representation of the world is any less valid than ours.

@fburton8 - 20.06.2025 10:07

“We’ve exhausted all of the static datasets” Is that really true? Have LLMs read every published book and paper, every scanned document, watched every movie and video, studied every photo and map, listened to every recording? That said, I have long believed that true intelligence requires embodiment, so I’m jazzed by this interview.

@NextLevelPls - 20.06.2025 11:08

I'm adding real stuff to robots lol I'm into the level that developers don't even get to touch lol

@ЖизньТакая-б8ы - 20.06.2025 11:50

Nidza Chad

@-1-cr5kg - 20.06.2025 11:51

Kept bumping into comments about Nixorus books, everyone saying they're dangerously honest and almost banned-level info. Eventually, I caved and checked it out. They're right: this stuff hits different. It's weirdly addictive, probably because it feels like knowledge you're not supposed to find.

@helge666 - 20.06.2025 13:07

Nitpick about the title: you can't download something TO a target; you upload something to a target. Alternatively, "Robots downloading skills" would be correct if the robot initiated the transfer to itself. I forgave Capt. Janeway confusing the terms in 1996, but you guys should know better...

@alexandermoody1946 - 20.06.2025 14:16

So much emphasis has been placed on displacing populations of humans from the productive component of our whole civilisation, and there is acknowledgement that human-derived or human-created data has already been used.

Whilst I accept that embodiment of intelligent machines in physical space has great value, the point is often cast aside that human populations can still be data-productive, especially if the data is not captured through parasitic behaviour by those who benefit most from population displacements in the working economy. The truth is plain: data already has great value as a currency for machine learning. The solution is recognising appropriate market structures for created data as a new basis for economic value creation. I know investments have been made to replace humans, but we are able to create a great diversity of data across many different fields, and if quality of data is to be secured there needs to be compensation for data creation, in the same way that physical goods and services are part of the economy. There is no legitimate argument that data should be free to take, because a shared future between humans and robots will be highly unlikely if humans are not allowed, or meaningfully facilitated, to participate.

In many ways we now have a duty to make machine learning succeed, and that involves recognising data creation as an input, rather than expecting inference to be the only interaction.

For this to really be the answer, we would need a new type of blockchain that acts as a permanent ledger recording the created data and all future interactions. Then the data input into each model could be fully traced and targeted, certification could be granted for suitability, and licences could support the well-being of the individuals who created the data.

@DomainManaging - 20.06.2025 15:52

vibe code organisations will begin 🚀

🎉

@BenC-q4y - 20.06.2025 17:21

idk, in a sense couldn't you make the argument that we DO live in a variant of Plato's cave? We don't experience the real world; we experience a model of the real world that our brain has built. Information comes in and is converted into spike trains, etc. For example, color doesn't truly exist in the raw universe but is more a construct of our internal model, though maybe that's taking the Plato's cave analogy too far. I don't disagree with more physical AI, but I also feel there is room for improvement in algorithms and architectures, be it embodied AI or something else. Continuous learning also feels important, which is maybe part of his argument.

@Art-AI-and-beyond - 20.06.2025 17:47

Exciting stuff, will keep an eye on these guys. Definitely feels like a good path to take for a properly adaptive, self-improving system. Something that can be both tailored to the user and also fed back to the community when privacy is not an issue.

@dabunnisher29 - 20.06.2025 19:47

The most ironic part about this is all this talk, talk, talk about PHYSICAL AI, but they have no PHYSICAL AI work to show. If you are going to do all this theory and talk about PHYSICAL AI, have something to show us. Show me the PHYSICAL ROBOTS you are working with. I went to their website. NOTHING PHYSICAL. This talk should have been called THEORY AI.

@Lazy_Dynamics - 20.06.2025 21:25

Great, as usual! Thank you, Tim!

@flyagaric23 - 20.06.2025 21:27

Oof, let the quality and clarity shine on.

@stefl14 - 20.06.2025 23:38

The truth is somewhere in the middle. I used to think LLMs can't get semantics from syntax. But that looks less and less tenable with time:

1. Relatively small LLMs (~7B params) trained only on text could caption images in 2023 with no fine-tuning except a linear map between image and word tokens. A caption requires a holistic understanding of the interrelationships between tokens, so it's surprising that a linear map is up to the task (not a slam dunk, because the presence of a "jumping dog" token amidst [sky, sky, sky, cloud, ball in air, ...] makes "dog chases ball" somewhat inferrable, but it's still true that the LLM wasn't trained on such orderings).
2. On the compositionality issue, LLMs display a lot more of it than we thought they would a few years ago, if we're being intellectually honest. Image generators can now generate things like "orange pig on purple unicycle doing a handstand in front of the London Houses of Parliament" in one shot (I tried this when 4o image generation came out). Nobody serious argues that was in the training data. I think the lesson is that interpolation in a high-dimensional space is extrapolation, at least to some degree.

None of this should be that surprising in hindsight. After all, we are finite state automata too!

All that being said, I'm an FEP guy. Feedforward nets trained with backprop probably can't get as sparse or "compositional" as our brains, and certainly not as efficiently. Interested to see if active inference approaches can overcome catastrophic forgetting and the like, as the formalism allows. Unfortunately, that will likely take hardware innovation.

@75M - 21.06.2025 01:06

Thanks Tim for this great interview again. Can't wait to listen to it again on Spotify, but watching it is also great; the quality of the production is magnificent!

@Aaron-tl9zy - 22.06.2025 03:22

Isn't it kinda crazy that we live in a world where we sometimes have to distinguish between the real world and the other, "digital world"?
