An Idea of How the Brain Works



One of the most popular and promising ways to accomplish this is through artificial neural networks. These are loosely inspired by how our neurons and brains work. In the prevailing model, a neuron receives signals, processes them, and sends signals onward; it may connect with other neurons, receive input from the senses, or produce an output. Although this is not a fully accurate picture of the brain and its neurons, the model is useful enough for many applications.
Artificial neural networks work the same way: artificial neurons, usually arranged in one or more layers, receive and send signals. Here's a basic illustration from TensorFlow Playground:

Notice that the network starts with the features (the inputs), which connect to two "hidden layers" of neurons. Finally, there's an output, where the data has been processed iteratively to produce a useful model or generalization.
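The layout described above can be sketched in plain Python. This is a minimal illustration, not a real framework: the layer sizes (2 inputs, two hidden layers of 4 and 3 neurons, 1 output), the random weights, and the ReLU/sigmoid activations are all assumptions chosen just to mirror a Playground-style network.

```python
import math
import random

random.seed(0)

def layer(n_in, n_out):
    """One fully connected layer: a weight list per neuron, plus biases.
    The random initial weights are illustrative, not trained values."""
    weights = [[random.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_out)]
    biases = [0.0] * n_out
    return weights, biases

def relu(x):
    return max(0.0, x)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def dense(inputs, weights, biases, act):
    # Each neuron: weighted sum of its inputs, plus bias, through an activation.
    return [act(sum(w * x for w, x in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

W1, b1 = layer(2, 4)   # first hidden layer: 2 input features -> 4 neurons
W2, b2 = layer(4, 3)   # second hidden layer: 4 -> 3 neurons
W3, b3 = layer(3, 1)   # output layer: 3 -> 1 output

def forward(features):
    """Send the input features through both hidden layers to the output."""
    h1 = dense(features, W1, b1, relu)
    h2 = dense(h1, W2, b2, relu)
    return dense(h2, W3, b3, sigmoid)  # single output squashed into (0, 1)

out = forward([0.5, -1.0])
```

Each value flows left to right, exactly like the signals in the Playground diagram: features in, transformed by each hidden layer, one number out.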


In many cases, artificial neural networks (ANNs) are used in much the same way as Supervised Learning. We take a large number of training examples and develop a system that learns from them. During learning, the ANN automatically infers rules for recognizing images, text, audio, or other kinds of data.
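To make "learning from labelled examples" concrete, here is a toy supervised training loop for a single neuron, in plain Python. Everything here is an assumption for illustration: the toy dataset, the epoch count, and the `learning_rate` and `l2` (regularization strength) values, which are exactly the kind of knobs mentioned later in this post.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(examples, epochs=200, learning_rate=0.5, l2=0.01):
    """Fit one neuron to (features, label) pairs with gradient descent.
    learning_rate and l2 are illustrative tuning knobs, not tuned values."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for x, y in examples:
            p = sigmoid(w[0] * x[0] + w[1] * x[1] + b)  # current prediction
            err = p - y                                  # prediction error
            # Nudge each weight against its error gradient; the l2 term
            # shrinks weights slightly each step (regularization).
            w[0] -= learning_rate * (err * x[0] + l2 * w[0])
            w[1] -= learning_rate * (err * x[1] + l2 * w[1])
            b -= learning_rate * err
    return w, b

# Toy labelled data: label 1 whenever the first feature dominates.
data = [([1.0, 0.0], 1), ([0.9, 0.2], 1), ([0.0, 1.0], 0), ([0.1, 0.8], 0)]
w, b = train(data)
```

After training, the neuron has inferred the rule from the examples alone: it scores inputs like `[1.0, 0.0]` above 0.5 and inputs like `[0.0, 1.0]` below it, without anyone writing that rule by hand.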


As you might have already realized, the accuracy of recognition heavily depends on the quality and quantity of our data. After all, it's Garbage In, Garbage Out: artificial neural networks learn from what we feed into them. We can also improve accuracy and performance through means other than improving the quality and quantity of the data, such as feature selection, tuning the learning rate, and regularization.

Potential & Constraints
The idea behind artificial neural networks is actually old, but it has recently undergone a massive resurgence, and many people (whether they understand it or not) now talk about it.
Why did it become popular again? Because of data availability and technological developments, especially the massive increase in computational power. In the past, creating and implementing an ANN could be impractical in terms of time and other resources.


But all that changed with more data and increased computational power. It's very likely that you can implement an artificial neural network right on your desktop or laptop computer. Behind the scenes, ANNs are already working to give you the most relevant search results, the products you're most likely to purchase, or the ads you're most likely to click. ANNs are also being used to recognize the content of audio, images, and video.


Many experts say that we’re only scratching the surface and artificial neural
networks still have a lot of potential. It’s like when an experiment about
electricity (done by Michael Faraday) was performed and no one had no idea
what use would come from it. As the story goes, Faraday told that the UK
Prime Minister would soon be able to tax it. Today, almost every aspect of our
lives directly or indirectly depends on electricity.


This might also be the case with artificial neural networks and the exciting field of Deep Learning (a subfield of machine learning that focuses on ANNs).