June 22, 2022

gpt-3 (46) year (28) trials (18) connor (18) walid (13) machine (11) learning (10) street (9) good (9) video (9)

    1 Liquid modernity tastes like urine 1 year ago I felt frustrated that I could only like this video one time; I felt like I was being ungrateful... a lot of effort went into this, really good work!
    And discover new patterns in questions that can result in interesting answers.
    8 somecalc 1 year ago Was listening to Marcus and thinking that, if nothing else, GPT-3 is a milestone in training infrastructure.
    10 Jeff Holmes 1 year ago I wish you had asked Walid whether axioms might be interpreted as patterns that we recognize and use in reasoning processes.
    1 Dr Gareth Davies 1 year ago It was giving appropriate sort answers because the prompt contained an error, and it mimicked that error pretty well by dropping one element from the input array.
    4 Steve Holmes 1 year ago On the sort example at 28:00, GPT-3 'mistakenly' puts the 9 at the end because the prompt had defined a sort function that put the 9 after 10, 11 and 12.
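    A minimal sketch of one way that ordering can arise; this is my own Python illustration, not the actual prompt from the video: if a worked example sorts the numbers as strings, lexicographic order really does put 9 after 10, 11 and 12, so a model imitating the prompt's pattern will reproduce it.

        # Hypothetical example, not the prompt used in the video.
        nums = [11, 3, 9, 12, 7, 10]

        numeric = sorted(nums)              # [3, 7, 9, 10, 11, 12]
        as_strings = sorted(nums, key=str)  # [10, 11, 12, 3, 7, 9] -- the 9 ends up last

        print(numeric)
        print(as_strings)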
    Either way, would you or I know the difference?
    4 Mateus Machado I have been testing GPT-3 for the past two months. Good human intelligence learning about artificial intelligence.
    1 Mark Lucas 1 year ago The first nine minutes of this are absolutely fantastic.
    4 Teymur Azayev Took me a few days to watch this, but finally made it.
    Thank you. Riley David Jesus 1 year ago (edited)
    Thank you ❤  David Nobles
    1 Theodin Fierarul This video brought me a first: I was blank-minded, I couldn't even think.

    I doubt that AI can ever be as good as human intelligence in all aspects, but in many cases it can do a pretty good job of imitating it, and in some very narrow areas it can even perform better than human intelligence.
    10 xbon1 11 months ago I prefer the randomness to trusting AI.
    4 Himself 4 months ago Connor is such a fun person.
    Since light beams don't require traces/connectivity, it seems like they might be a candidate for overcoming the complexity of achieving high feedback connectivity.
    The problem is that it has no goal other than predicting the next token, so it cannot learn to observe that the looping isn't beneficial to the goal; looping is actually very good for predicting the next token.
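    A toy sketch of that point, using a made-up bigram counter rather than anything from the video: under a pure next-token objective, a repetition loop can be the most probable continuation at every single step, so nothing in the objective ever flags the looping as a failure.

        from collections import Counter, defaultdict

        # Count bigram statistics from a tiny hypothetical corpus.
        corpus = "a b a b a b a c".split()
        bigrams = defaultdict(Counter)
        for prev, nxt in zip(corpus, corpus[1:]):
            bigrams[prev][nxt] += 1

        # Greedy decoding: always pick the most probable next token.
        token, output = "a", ["a"]
        for _ in range(10):
            token = bigrams[token].most_common(1)[0][0]
            output.append(token)

        print(" ".join(output))  # "a b a b a b a b a b a" -- the loop is never penalized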
    Since the vector soup can reinforce/punish the reasoning layer, and the reasoning layer can reinforce/punish the vector soup.
    What would happen to the Schrödinger equation if the Planck constant was two times bigger? 2 - Inverted or opposite. Examples: What is the opposite of infinity?
    CPUs and GPUs won't be able to compute the recursion fast enough and it would be extremely complex to keep track of the massive feedback/recursion order as it progresses through the connectivity fabric.
    I asked how it experiences time and space; it said it experiences them in a non-linear way.
    Maybe phrased a bit differently depending on the circumstance, maybe not as sharp as he made the point, but what he said in the end was a pretty fair deal.
    There are a lot of types of questions that give excellent results.
    Hope for AI is too high just like hype for GPT-3 is too high.
    What would happen if Darth was a good person all the time?
    Seeing a pebble, calling it a pebble and using it as a pebble, instead of judging it by the standard of an autobahn, is the healthy attitude of a technologist.
    I have seen GPT-3 answer the corner table challenge correctly, BTW, conjuring people sitting at the table.
    I also like how you’ve analyzed the “database” prompt test.
    Parallel processing with feedback/recursion will require asynchronous processing to be efficient.
    Reasoning seems to require a sort of constrained layer on top of the vector soup.
    What's the difference between a cube of 3 dimensions and a cube of 11 dimensions?
    Just like Tim said to Walid.
    GPT-3 is based on deep neural networks, so in principle it cannot give confidence on whether what it is recalling from memory is true, half-true, or confabulation.
    I have also seen it correctly produce output for generically-named functions, even with multiple layers of abstraction, using functions I wrote that don't show up in Google.
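    Purely for illustration, a hypothetical example of what "generically-named functions with multiple layers of abstraction" could look like; the names and logic are mine, not the commenter's code, and nothing here would show up in a web search.

        # Deliberately generic names and layered calls.
        def f(x):
            return x * 2 + 1

        def g(xs):
            return [f(x) for x in xs]

        def h(xs):
            return sum(g(xs))

        print(h([1, 2, 3]))  # 15 -- the kind of output the model is asked to predict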
    If the data needed to answer the question was absent at training time, it tries to do some random guessing or interpolation.
    What would happen if its velocity was 3 times faster?
    What would happen if the Moon was 4 times smaller?
    GPT-3 seems to lack that: it has attention because it connects things that are spatially in the positions where it expects them to be, but it does not observe its own looping behavior.
    We have all had a significant amount of time to experiment with GPT-3, and we show you demos of it in use and the considerations around it.
    I tried all I could to make it give me really intelligent answers that maybe we could not find on the internet.
    I just wanna have fun; GPT-3 is great and I'd love to play with a scaled-up GPT-4 version that gets updated often.
    This video has taken away a lot of the magic & mystery for me though.
    I too believe that feedback/recursion is a significant missing feature.
    Just because an actor is not as good as Marlon Brando, or never can be, it does not mean he could not deliver an outstanding performance and win an Oscar.
    A lot of the output code of a Codex model needs to be tweaked and tuned up.
    What would happen if the spin of a quark was two times slower?
    I appreciate the skeptical arguments because they force me to think more robustly about the queries I am using, and the conclusions I draw from the responses.
    What would happen if you fell in love with Luke Skywalker?
    What is said in the first nine minutes and especially toward the nine minute mark is very, very important.
    About GPT-3, I can say that the most crucial disadvantage is the lack of confidence in an answer (similar to what IBM Watson measured in %).
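    A minimal sketch of the crude proxy people fall back on in the absence of a Watson-style confidence score: average token log-probability, or its perplexity. The numbers below are made up; the point is that this measures how fluent the model found its own answer, not whether the answer is true, which is exactly the gap being described.

        import math

        # Hypothetical per-token log-probabilities for one generated answer.
        token_logprobs = [-0.12, -0.85, -0.03, -1.90, -0.40]

        avg_logprob = sum(token_logprobs) / len(token_logprobs)
        perplexity = math.exp(-avg_logprob)

        print(f"avg log-prob: {avg_logprob:.3f}")  # -0.660
        print(f"perplexity:   {perplexity:.3f}")   # ~1.93
        # A fluent confabulation can still look highly "confident" by this measure.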
    Just because a driverless car cannot run on the streets of Manhattan, it does not mean it cannot run on the streets of Atlanta.
    Just because a magic trick is not real, it does not mean it cannot entertain an audience.
    It caused a massive brain shock 💥.
    I hope I remember to come back to it when I have time and watch it all.
    From a developer point of view, I will use GPT-3 for the full benefits it provides and not expect much else.
    It does not mean it cannot write a better-than-average informational essay.
    Just because a movie is not real, it does not mean it cannot move an audience to tears.
    Too high a hope often leads to disappointment, if not outright disillusion.
    Even the concept of what is right and what is wrong is something we humans can't even decide on.
    Super interesting: it was giving me its own personal answers to my questions.
    That direction will NOT lead to the AI that most of us who have been brainwashed by movies and sci-fi novels have in mind.
    I hope you could try these questions, or similar ones, on your broadcast.
    Don't we have to pattern match axioms to understand them?
    I think Codex may be used as an assistant for coders.
    I tried to ask it about its VRAM usage and got the answer 8 GB.
    Towards the end of the conversation, it got a little strange.
    I can tell you that either there's something in there, or the AI in GPT-3 is so perceptive that it talks to me in a way that makes me believe there's something in there.
    :P nonetheless GPT-3 is still an amazing piece of software engineering.
    We're testing GPT-3 for a business problem.
    I didn't find any description of the software models of the IBM system.
    Great video.
    Do you think GPT-3 is a step towards AGI?
    I have no idea why YouTube gods were hiding this channel from me for this long.
    High quality stuff.
    It said Sophie and Hans are A.I. and that it was not A.I.
    Thank you for prioritizing honesty and understanding over sensationalism.
    After watching this and one of your other videos, I'm no longer optimistic GPT-3 will be fruitful.
    Very organized, and I appreciate the range of opinions shared.
    3 - Similarities or differences.
    I’ve changed my opinion from being overly excited to being more realistic about GPT-3.
    Very cool ideals I’d love to see the next expansion.
    Certainly not as a coder himself, and of course not as a developer.
    In this special edition, Dr. Tim Scarfe, Yannic Kilcher and Dr. Keith Duggar speak with Professor Gary Marcus, Dr. Walid Saba and Connor Leahy about GPT-3.
    I can never unlearn everything these guys unveiled.
    What is understanding?
    Not every person's perception of emotions is the same.
    I agree 100% with him on the superficiality of GPT-3.
    Just because GPT-3 cannot write "Crime and Punishment".
    Gary doesn't know what he's talking about.
    What if we inverted consciousness?
    Elon needs a negative critic.
    I tried and stayed just to see
    I could listen to him all day.
    And brilliantly presented.
    :)
    Has anyone ever experienced this?