NMCS

VISUAL DEPROTECTION MODEL FOR NEURAL NETWORKS


Your brain does not manufacture thoughts. Your thoughts shape neural networks. A neural network is not a hand-written algorithm; it is a network of connections with weights on them, and those weights can be adjusted so that the network learns. You teach it through trials, much like we apply the trial-and-error method in mathematics to arrive at an answer. It is literally the case that learning languages makes you smarter: the neural networks in the brain strengthen as a result of language learning.

There is a very inspirational quote on neural networks. It goes like this: "I think the brain is essentially a computer and consciousness is like a computer program. It will cease to run when the computer is turned off. Theoretically, it could be re-created on a neural network, but that would be very difficult, as it would require all one's memories." ~ Stephen Hawking

Neural networks are a technique for building computer software that learns from data. Just as we need to feed our brain examples before it accepts certain things, a network must be fed data before it can learn. The idea is based very loosely on how we think the human brain works. First, a collection of software "neurons" is created and connected together, which allows them to send messages to each other. Next, the network is asked to solve a problem, which it attempts to do over and over, each time strengthening the connections that lead to success and weakening those that lead to failure.

What Do All the Colors Mean? Let's explore it. In the visualization, each color carries a particular meaning. Orange and blue are used in slightly different ways throughout, but in general orange shows negative values while blue shows positive values. The data points (represented by small circles) are initially colored orange or blue, corresponding to negative and positive values. In the hidden layers, the lines are colored by the weights of the connections between neurons: blue shows a positive weight, which means the network is using the output of that neuron as given, while an orange line shows that the network is assigning a negative weight. In the output layer, the dots are colored orange or blue depending on their original values. The background color shows what the network is predicting for a particular area, and the intensity of the color shows how confident the prediction is.

Neural networks have become the de facto standard for image-related tasks in computing, and they are currently being deployed in a multitude of scenarios, ranging from automatically tagging photos in your image library to autonomous driving systems. These machine-learned systems have become ubiquitous because they perform more accurately than any system humans were able to design directly, without machine learning. But because the essential details of these systems are learned during the automated training process, how a network goes about its given task can remain a bit of a mystery.

Strikingly, state-of-the-art deep neural networks (DNNs) can still recognize scrambled images perfectly well, and this observation helps uncover a puzzlingly simple strategy that DNNs seem to use to classify natural images. These findings, published at ICLR 2019, have a number of implications: first, they show that solving ImageNet is much simpler than many have thought. Second, they allow us to build much more interpretable and transparent image classification pipelines. Third, they explain a number of phenomena observed in modern CNNs, such as their bias towards texture and their neglect of the spatial ordering of object parts.

In the old days, before deep learning, object recognition in natural images used to be fairly simple: define a set of key visual features ("words"), count how often each visual feature is present in an image ("bag"), and then classify the image based on these counts.

"Daydreaming defeats practice; those of us who browse TV while working out will never reach the top ranks. Paying full attention seems to boost the mind's processing speed, strengthen synaptic connections, and expand or create neural networks for what we are practicing." ~ Daniel Goleman
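The trial-and-error learning described above, where connections that lead to success are strengthened and those that lead to failure are weakened, can be sketched with a single software "neuron" (a perceptron). This is a minimal illustrative example, not code from any particular library; the data set, learning rate, and epoch count are arbitrary choices:

```python
# A minimal sketch of trial-and-error learning: a single "neuron"
# (perceptron) adjusts its connection weights whenever it answers wrongly.

def train_perceptron(samples, labels, epochs=20, lr=0.1):
    w = [0.0, 0.0]   # connection weights
    b = 0.0          # bias
    for _ in range(epochs):
        for (x1, x2), y in zip(samples, labels):
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = y - pred              # 0 when the trial succeeded
            w[0] += lr * err * x1       # strengthen or weaken the
            w[1] += lr * err * x2       # connections after each trial
            b += lr * err
    return w, b

# Learn the logical AND function through repeated trials.
samples = [(0, 0), (0, 1), (1, 0), (1, 1)]
labels = [0, 0, 0, 1]
w, b = train_perceptron(samples, labels)
print(w, b)
```

After enough trials the weights settle on values that get every example right, which is exactly the "strengthen what works, weaken what fails" loop in miniature.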
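The color rule in the visualization (sign picks the hue, magnitude sets the intensity) can be expressed as a tiny function. The function name and the scaling by a maximum absolute value are illustrative assumptions, not the visualization's actual code:

```python
# A sketch of the color encoding: the sign of a value picks the hue
# (blue = positive, orange = negative) and its magnitude sets the
# intensity. The name and max_abs normalization are illustrative.

def weight_color(value, max_abs=1.0):
    hue = "blue" if value > 0 else "orange"
    intensity = min(abs(value) / max_abs, 1.0)  # 0 = faint, 1 = strong
    return hue, round(intensity, 2)

print(weight_color(0.8))   # a strongly positive weight: intense blue
print(weight_color(-0.1))  # a weakly negative weight: faint orange
```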
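The old bag-of-features pipeline mentioned above (count visual "words", then classify from the counts) can be sketched in a few lines. The vocabulary, the hand-picked class weights, and the toy "images" below are all invented for illustration:

```python
# A toy sketch of bag-of-features classification: count how often each
# visual "word" occurs (the "bag"), then score each class from the counts.
from collections import Counter

VOCAB = ["edge", "fur", "wheel", "eye"]

# Hand-picked per-class evidence weights for each visual word (toy values).
CLASS_WEIGHTS = {
    "dog": {"edge": 0.1, "fur": 2.0, "wheel": -1.0, "eye": 1.0},
    "car": {"edge": 0.5, "fur": -1.0, "wheel": 2.0, "eye": -0.5},
}

def classify(features):
    bag = Counter(f for f in features if f in VOCAB)  # the "bag" of counts
    scores = {
        cls: sum(weights[word] * bag[word] for word in VOCAB)
        for cls, weights in CLASS_WEIGHTS.items()
    }
    return max(scores, key=scores.get)

print(classify(["fur", "fur", "eye", "edge"]))  # fur-heavy image -> "dog"
print(classify(["wheel", "wheel", "edge"]))     # wheel-heavy image -> "car"
```

Note that the counts throw away where each feature appeared, which is exactly the neglect of spatial ordering that the ICLR 2019 findings attribute to modern CNNs.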
