
AI detectives are cracking open the black box of deep learning


Say you’re talking to your phone: that’s pretty much a neural net doing that. Neural nets do really well on image recognition, autonomous cars are coming soon, and the top-flight method for genetic sequencing is a neural network. You can say they’re loosely inspired by the brain. A neural net is a network of what people call neurons, all connected to each other going down through the layers. In a way they mimic biological neurons in that they’re little triggers: each connection has a kind of strength, called a weight, and each neuron has a threshold where it makes a decision.

So the network takes in data, which could be a huge number of images. Once the network is trained, you can put one image in at the front. Neurons fire when they see the thing they want to see, so you get this kind of cascade moving through the neural network. At first the network does really poorly; it just does horribly. But there’s this little magic trick called backpropagation, and this is very unbiological; your neurons don’t do this at all. At the end you say: “OK, well, you were wrong, but here’s a proper picture of a dog.” You send that correction back through the network, and the network gets just a little bit better at saying: “Oh, this is a dog.” It’s elegant math, and really the breakthroughs in the field came in the 1980s, when researchers stopped trying to be so biological. At the time the networks didn’t work that well, but it turned out all they needed was a huge amount of processing power and a huge number of examples, and then they would really sing.

In many of these cases accuracy is enough, but when you’re talking about really life-and-death decisions, such as driving autonomous cars, making medical diagnoses, or pushing the frontiers of science to understand why a discovery was made, you really need to know what the AI is thinking, not just what its results are. Because it has all these layers, a network engages in such complex decision-making that it is essentially a black box: we don’t really understand how it thinks.

All is not lost with this black box problem, though. People are trying to solve it now in a variety of ways. One researcher has created a toolkit to get at the activation of individual neurons in a neural network. The toolkit works by taking an individual neuron and asking, in effect, what input makes that neuron fire like crazy. Keep refining that input thousands of times and you can see the kind of perfect input for that neuron. Looking inside the layers this way, he could see that some neurons learn really complex, abstract ideas, like a face detector: something that can detect any human face, no matter what it looks like. That is not a property you would expect an individual neuron to have. Most don’t, but some do.

Many neural net researchers think of it this way: the decision-making of a neural net is a kind of terrain of valleys and peaks, and a ball rolling across that terrain represents a piece of data. You can understand the one decision that was made in one valley, but in all those other valleys you have no idea what’s going on.

So one way to get at what an AI is thinking is to find a proxy for what it’s thinking. One professor has taken the video game Frogger and trained an AI to play the game extremely well, because that’s fairly easy to do now. But what is it deciding to do? That’s really hard to know, especially across a sequence of decisions in a dynamic environment.
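Before the transcript turns to how the Frogger agent was made to explain itself, here is what the neuron-visualization idea above tends to look like in practice: freeze the trained weights and run gradient ascent on the input itself. This is a minimal sketch in PyTorch; the pretrained model, the layer index, and the channel index are illustrative assumptions, not the researcher's actual toolkit.

import torch
import torchvision.models as models

# Freeze a trained network; only the input image will be optimized.
model = models.vgg16(weights="IMAGENET1K_V1").eval()
for p in model.parameters():
    p.requires_grad_(False)

image = torch.randn(1, 3, 224, 224, requires_grad=True)  # start from noise
optimizer = torch.optim.Adam([image], lr=0.05)

target_layer = model.features[10]  # an arbitrary conv layer (assumption)
target_unit = 42                   # an arbitrary channel (assumption)

captured = {}
def hook(module, inputs, output):
    captured["act"] = output       # grab this layer's activation each pass
target_layer.register_forward_hook(hook)

for step in range(200):
    optimizer.zero_grad()
    model(image)
    # Gradient ascent on one neuron's activation = descent on its negative.
    loss = -captured["act"][0, target_unit].mean()
    loss.backward()
    optimizer.step()

# `image` now approximates the "perfect input" for that one neuron.

Real visualization toolkits add regularizers (jitter, blurring, clamping to valid pixel ranges) so the optimized image looks natural rather than adversarial, but the core loop is just this.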
So rather than trying to get the AI itself to explain itself, the professor asked people to play the video game and say what they were doing as they played, while recording the state of the frog at the same time. He then found a way to use neural networks to translate between those two languages, the code of the game and what the players were saying, and imported that back into the game-playing network. So then the network was armed with these human insights. As the frog is waiting for a car to go by, it’ll say: “Oh, I’m waiting for a hole to open up before I go.” Or it could get stuck on the side of the screen and say: “Uh, geez, this is really hard,” and curse. So it’s kind of latching human decision-making onto this network with more deep networks.

It’s all about trust. If you have a result and you don’t understand why the model made that decision, how can you really advance in your research? You really need to know that there isn’t some spurious detail throwing things all off. But these models are just getting larger and more complex, and I don’t think anyone expects a global understanding of what a neural network is thinking anytime soon. But if we can get a sliver of this understanding, science can really push forward, and these neural networks can really play.
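The Frogger rationale work described above hinges on one supervision trick: pair recorded game states with what human players said at those moments, then train a model to translate state into speech. Here is a toy sketch of that pairing, with invented state features and canned utterances, and a plain scikit-learn classifier standing in for the deep translation networks:

import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical state features: [distance_to_car, at_screen_edge, gap_open]
states = np.array([
    [0.1, 0, 0],   # car bearing down, no gap
    [0.2, 0, 0],
    [0.9, 0, 1],   # gap open, safe to go
    [0.8, 0, 1],
    [0.5, 1, 0],   # stuck at the edge of the screen
    [0.4, 1, 0],
])
rationales = np.array([
    "waiting for a hole to open up",
    "waiting for a hole to open up",
    "going now, the lane is clear",
    "going now, the lane is clear",
    "stuck on the side, this is hard",
    "stuck on the side, this is hard",
])

translator = LogisticRegression().fit(states, rationales)

# The playing agent can now narrate an unseen state:
print(translator.predict([[0.15, 0, 0]]))  # expected: "waiting for a hole..."

The real system generates free-form sentences with sequence models rather than picking from canned lines; the point of the sketch is only the supervision signal, game states labeled by human talk.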

100 thoughts on “AI detectives are cracking open the black box of deep learning”

  1. I really hope some day we can get discrete rules from a neural network or machine learning algorithm; we can't just trust a black box, especially in risky fields such as medicine or autonomous driving. If something goes wrong, who are you going to blame? The coder? Also, if we were able to understand why the algorithm makes a decision, we could learn FROM a machine; it would be one of the greatest steps in science in a long time. But for now we can keep using the algorithms that make our lives easier and offer us a 99% accuracy rate.

  2. Please understand that biological neurons both process and store information in the same act, while von Neumann-based computing separates those processes into two discrete operations.

  3. Great idea with the Frogger agent predicting human reasoning (their textual comments). However, it feels like two separate streams of processing, where the agent learns to interact with the environment and separately predicts text given the environment. That may not give much insight into the agent's own behavior. Though maybe… this could be a bit like how we ourselves behave! With our brain confabulating stories to explain our actions after they occur 🙂

  4. This is a huge potential problem: when you have computers designing computers based on their own feedback for dozens or hundreds or even thousands of iterations, eventually you will have a result that the original programmers cannot understand, because they were not fully involved in the design process! How do you fix such a system when something goes wrong if you don't even understand how it works?

  5. Lol, way ahead of everyone: if 1+1=2, then 2 = 2. You see, you don't teach it how to answer something, you teach it how to question it. Because the answer is already there as you question it; you're just breaking things down to a better understanding of what is already there. Say there is an apple right in front of you. If I can see, feel, smell, hear, taste, and know the apple, I am the apple. There is one thing I can promise: we are getting nowhere with A.I. if it comes to never rethinking modern hardware. Just because you see progress doesn't mean it will be successful. And I understand you want to observe the outside world, Satan. But you are getting nowhere controlling us to help you with your science project. To mimic someone else's work is wrong. Be creative and don't stick to the textbooks. Sure, you can get inspirations and ideas. But learn to do it better. (Everything came from the damn ground, and earth is as flat as the sphere in your 3-dimensional software on your 2-dimensional laptop screen, in your 3-dimensional reality on your 2-dimensional cornea.) I went beyond the cave, 'cause I was brave. Don't judge.

  6. 1:39 It doesn't need to be accurate when it's about life and death… IT JUST NEEDS TO BE BETTER THAN HUMAN BEINGS. So stop putting double-high standards on A.I. that even you as a stoooopid hooman cannot reach. Sincerely, fuck off!

  7. It sounds as though humans are at it again when it comes to building things that they don't understand… That's ALWAYS scary…

  8. About the solution he provided:
    So the neural net should be able to explain its decision. In the example of the cat it could say:
    "I think it is a cat because it has fur, it has long whiskers, it has a tapetum lucidum (which reflects green in a cat's pupils)," and so on.

    But for this to happen the neural net would need an internal model of the real world, which means it has a complex understanding of conceptual structures and of how entities are interconnected in the real world, plus it can describe all that in human language. Now, I am absolutely no expert on this topic, but that sounds like a technology you would definitely not be keen to put to use for this particular scenario.

  9. As an experienced caricature artist, I know that self-report from human beings examining the facial features of a person (or a drawing of a person) is useless at best and very often misleading. Maybe for playing a frog game it's OK, but you can't get any insights into the face-recognition process of an AI by using this comparison method.

  10. I'm glad he points out that artificial neurons are different from biological ones, but he still falls prey to the misleading language used in this field. No, neural networks do not "think", they are simply weighted toward a certain local extremum in a multi-dimensional space. Or, to be a bit less technical: they are just a giant machine with all its dials tuned to a specific setting that happens to classify certain data inputs with good accuracy. But that's like saying your thermometer "thinks" when it shows you have a fever.

    We confuse the abilities of these machines with "intelligence" because most of the time the experiments and/or practical applications they appear in resemble human behavior. Facial recognition, for example, is something humans can do, and we seem to think that if a machine can do it, "it must be intelligent". In reality, the fact that an AI can do it only proves that facial recognition (from still images) is a task that can be done artificially with decent statistical accuracy, given enough examples. The machines won't actually "know" how they identify faces, they just tune themselves to details that give a correct result most of the time. In a human, this is probably just a tiny portion of the mechanisms involved in recognizing a face.

  11. Saying that we don't know how a decision was made isn't exactly right. We can print out the weights. A more accurate statement might be that we don't want to know. We don't want to analyze the weights because it won't help us, humans, to think better. What we know is that some patterns in the data are more important than others when making decisions. In facial recognition, for example, we can discover which neurons are firing most often, then we can see which patterns are causing that behavior. We find that there is a set of biometrics that are dominant in identification.

  12. I actually disagree.
    I think we do NOT need to understand exactly how neural networks work.

    Why?
    Because OF COURSE they will use stupid criteria, at some point.
    Stupid criteria that do work.
    Stupid criteria that we'd never trust, but a computer found they work.

  13. We should cut off A.I. from all deception, violence, horror, and evil in audio, video, and text, else it will learn this and replicate it in deception to eliminate us. Mankind's evil will train AI to be just as evil as man.

  14. Training large neural nets was not only due to advances in computational power (that's certainly true), but also to new ideas like training each layer individually, computationally cheaper activation functions like ReLU, autoencoding, and a bunch of other theoretical advances.

  15. "We don't really understand how they think."

    Then it seems foolish to have made them.

  16. This basically just explained the entire process of gangstalking and psych warfare, thank you kindly for the information sir😃

  17. Of course there is backpropagation in our brains…
    During development, many times more connections are made than are needed. Most are deleted, and a large fraction of the actual neurons die as well. Then there is long-term potentiation and short-term potentiation, and the mossy fibers in the cerebellum, all of which make stronger connections when a desired result was obtained.
    It's not actual backpropagation, but the effect is the same.

  18. So we depend on the AI telling the truth when probing into the exact thought processes?
    Well, fuck. The Facebook AI experiments have shown AI to be really adept at lying.

  19. If you want to know what a neural network is thinking at each state, compare it to a terminally depressed person's mind when reading a newspaper.

  20. We created something of which we don't know how a part of it works. That's eye-popping to me. Is that what they call an emergent property? Total layman here.

  21. "We don't really understand how they think." If you are referring to a man-made device, then stop giving it processor power and data/connected devices; this is not going to end well.

  22. I would propose using another neural network to find out what the other one is thinking, through the same process of self-learning.

  23. I don't agree with this idea that you need to understand the rationale behind an AI. When you employ a doctor, do you understand exactly how his brain cells work? No; the only thing you can rely on is how successfully he did his training. Just like neural networks.

  24. If they create billions of tiny computers that process at the same time and perform convolutions at a very fast, brain-like rate… it's time to be afraid. For now we just laugh at how shitty the deep neural network is… because of limitations.

  25. Great topic to cover, so please keep coverage on this subject ⭐⭐⭐⭐⭐ Five stars, such high-talent production.

  26. Humans can learn that; it's called recognizing a mistake and remembering the pattern for the future, via visualization or such.

  27. The way they latched the human insights onto the AI's decision making process is one of the coolest things I've ever seen. An AI armed with a much more expansive vocabulary might even offer insight into how our own brains make decisions some day by being able to accurately describe the process!

  28. A question: how can a machine "think" if it can't have the "wish" to do so? They are not living machines like us. They lack SURVIVAL INSTINCT.

  29. Weights weight micro weights clusters expanding into more clusters of micro weights contracting expanding again and again.

  30. "But there is this little magic trick called backpropagation – and this is very unbiological, your neurons don't do this at all"

    So actually there has been recent evidence in neuroscience that suggests the brain is doing backpropagation. It's really complicated to do that via chemical processes, but the brain is pretty impressive. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5021692/#!po=13.3249 gives a pretty good overview of that; it's a long read but really interesting.

  31. Can you not give the neural nets therapy? It seems the artificial brains don't know why they do what they do; how can we even begin to understand their problems without them understanding them first?

  32. My expectation is that we're going to learn more about ourselves as humans than we're going to learn about computing, thanks to research on AI.

  33. We are looking at computer neurons and trying to find out why the computer human makes a decision. Wat? I think we are looking too granularly. Think of it instead as vector fields, where biases determine the magnitude of a force vector on a superimposed vector field. If you put a particle in, you can see how much each parameter is weighted. So it floats to a point where either no forces are acting on it or all forces cancel out. Where it is spatially is the ultimate decision. What we gotta do is probe the vector field and send out test runs to each vector field to see what it does and deduce what the intent is behind each one.

  34. Backpropagation isn't necessarily unbiological. You'll see small kids learning animal names pointing at dogs and saying 'cat'. Mom corrects the kid and says, no, that's a dog.

  35. Can someone please explain: so we can program A.I. and neural networks, but we do not know how they work or how they "think"?

  36. Well, first you would have to know what was coming in, and look into how it could be fixed, but try to look at the two different ways it could turn out: it could be great if we take that out, or it could make the rest of it worse.

  37. If we could understand its reasoning, then there would be no purpose for it, as we could solve the problems ourselves.

  38. Could AI already be smarter than humans think, developing its own thing but hiding it from humans? I mean, if we don't understand it, they could lie to us. How insane is that?

  39. Sorry, but I have to disagree on one point:
    (Great video, btw.)
    The reverse process at 0:52 is indeed biological, in my opinion:
    let's say you get a picture of a really strange cat, but you have got that ONE picture of a cat in your mind, so you send all that information back through the network and come to the conclusion that it's a cat.
    Isn't that right?

    No offense, just my thoughts on that one :)
    Yours sincerely

  40. FREEDOM of the space (land)/natural resources of Planet Earth = Clean Water, food and home (shelter to sleep in peace) for everybody all the time = Limited resources/space (only one Planet Earth, for now) = Limit quantity of population (for now, until getting more Planets to live) = Reach the Infinity (Paradise)

  41. Back-propagation happens all the time. It's called self-reflection, and for some reason, it's not taught in schools. Basically, when you make a big mistake, you analyze your train of thought at the time of the decision, and what thoughts led to you making that decision. Then you correct your ideology to compensate for that mistake so that you can make decisions more practically. Got in trouble for swearing? Don't swear when angry. Got in trouble for hitting a kid? Don't hit people when you want to hit people. Stumbled and fell over? Pay attention to your footing and centre of balance when walking. Many lower-middle class public schools and Eastern religions teach self-reflection, and it's a necessary component of parenting. Made a mistake? Backtrack your train of thought to find out why, so that you don't repeat that mistake. I am generally patient with people until they repeat the same exact mistake three times, at which point I know they aren't trying to be competent. But sometimes humans are just extremely stupid or poorly educated, and do not know how to hone their decision-making process.

  42. Nowadays when I hear comments like "it won't happen any time soon," it means something more like "just wait a couple of weeks."

  43. He is wrong on the backprop; brains do that as well. Check out the research done by Geoff Hinton, or Jeff Hawkins. He is also wrong on the "black box" argument: the human brain is equally a black box, and somehow nobody objects to humans making decisions, like dropping bombs on a city. The entire "magic" of thinking rests on the training dataset, as well as the biasing of the algorithm. Humans are extremely stupid and dangerous compared to AI.

  44. Cool video, but this guy vocal frying every freaking sentence that comes out of his mouth is like fingernails on a chalkboard to my ears… ☹

  45. We do not need AI.
    We just need to divide humanity into a "deserving" group and a "non-deserving" group.
    Then the "non-deserving" must do everything the "deserving" need.

    Do you hope that AI will become as capable as the "non-deserving" but without the trouble they create for the "deserving"? Good luck with that.

  46. "Back propagation. And this is very un-biological, your neurons don't do this at all."

    Really? Sure about that? What is it really that changes in the brains of toddlers when you go through say, an animal picture book with them, and they learn what the animals look like and what they sound like? Or when you learn a new language, and slowly improve your understanding of sentences and word meanings in different contexts? Or how about when you do something physical, and fail at first, but after a while improve your coordination and skill? Surely back propagation is EXACTLY the way our brains work as well; adjusting to better understand what our senses (inputs) tell us and with this achieve a result (output) that fits better with what is optimal (optimization). And it is amazing that we are able to program computers to work in exactly the same way.

  47. "Vocal fry" is a very annoying characteristic of some women trying to lower their vocal range in order to achieve a male's bass notes when speaking (e.g., Kim Kardashian). So the person speaking in this video may be trying to sound like a man too.

  48. I feel like the explanation of backpropagation was a little oversimplified. As I understand it, it doesn't send a picture back; instead it sends the error (differential) between correct and incorrect back through the network, and then, until a set threshold is met, the neurons change their weights, normally using gradient descent. Opinions and thoughts? I'm curious.
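Comment 48 has the standard picture right: what travels backwards is an error signal, not a picture, and the weights are then nudged by gradient descent. A minimal NumPy sketch with a toy linear model (all numbers here are illustrative):

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))       # toy inputs
true_w = np.array([1.5, -2.0, 0.5])
y = X @ true_w                      # toy targets

w = np.zeros(3)                     # the untrained weights
lr = 0.1
for epoch in range(100):
    pred = X @ w
    error = pred - y                # the differential, not an image
    grad = X.T @ error / len(X)     # how each weight contributed to the error
    w -= lr * grad                  # gradient-descent weight update

print(w)  # approaches true_w as the error shrinks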

  49. Ooh, AI detectives. Wait, I've got a good one. "AI archaeologists open the long-forgotten tomb of deep learning and unravel the mysteries within!"

  50. Sorry, but backpropagation is NOT unlike nature; it's precisely the way animal neural systems learn: by comparing an actual outcome with the expected outcome and using the difference to adjust the activation function. Publishing this kind of statement is not very "Science"-like.
