AI vs. Machine Learning vs. Deep Learning vs. Neural Networks: What's the Difference?

These terms are often used interchangeably, but what are the differences that make them each a unique technology?

Technology is becoming more embedded in our daily lives by the minute, and in order to keep up with the pace of consumer expectations, companies are relying more heavily on learning algorithms to make things easier. You can see its application in social media (through object recognition in photos) or in talking directly to devices (like Alexa or Siri).

These technologies are commonly associated with artificial intelligence, machine learning, deep learning, and neural networks, and while they do all play a role, these terms tend to be used interchangeably in conversation, leading to some confusion around the nuances between them. Hopefully, we can use this blog post to clarify some of the ambiguity here.

How do artificial intelligence, machine learning, neural networks, and deep learning relate?

Perhaps the easiest way to think about artificial intelligence, machine learning, neural networks, and deep learning is to think of them like Russian nesting dolls. Each is essentially a component of the prior term.

Russian Nesting Dolls Analogy for AI, ML, DL and Neural Networks

That is, machine learning is a subfield of artificial intelligence. Deep learning is a subfield of machine learning, and neural networks make up the backbone of deep learning algorithms. In fact, it is the number of node layers, or depth, of neural networks that distinguishes a single neural network from a deep learning algorithm, which must have more than three.

What is a neural network?

Neural networks—and more specifically, artificial neural networks (ANNs)—mimic the human brain through a set of algorithms. At a basic level, a neural network is comprised of four main components: inputs, weights, a bias or threshold, and an output. Similar to linear regression, the algebraic formula would look something like this:

  ∑(wi*xi) + bias = (w1*x1) + (w2*x2) + (w3*x3) + bias

  output: f(x) = 1 if ∑(wi*xi) + b ≥ 0; 0 if ∑(wi*xi) + b < 0

From there, let's apply it to a more tangible example, like whether or not you should order a pizza for dinner. This will be our predicted outcome, or y-hat. Let's assume that there are three main factors that will influence your decision:

  1. If you will save time by ordering out (Yes: 1; No: 0)
  2. If you will lose weight by ordering a pizza (Yes: 1; No: 0)
  3. If you will save money (Yes: 1; No: 0)

Then, let's assume the following, giving us the following inputs:

  • X1 = 1, since you're not making dinner
  • X2 = 0, since we're getting ALL the toppings
  • X3 = 1, since we're only getting 2 slices

For simplicity purposes, our inputs will take a binary value of 0 or 1. This technically defines it as a perceptron, as neural networks primarily leverage sigmoid neurons, which represent values from negative infinity to positive infinity. This distinction is important since most real-world problems are nonlinear, so we need values which reduce how much influence any single input can have on the outcome. However, summarizing in this way will help you understand the underlying math at play here.
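To make that distinction concrete, here is a minimal sketch (in Python, not from the original article) of the two activation styles mentioned above: the hard step function a perceptron applies to its weighted sum, and the sigmoid function, which accepts any real-valued input and squashes it smoothly into a value between 0 and 1.

  import math

  def step(z, threshold=0.0):
      # Perceptron-style activation: fires (returns 1) only at or above the threshold
      return 1 if z >= threshold else 0

  def sigmoid(z):
      # Sigmoid activation: accepts any real value and squashes it into (0, 1)
      return 1.0 / (1.0 + math.exp(-z))

  print(step(2.0))      # 1 -- a hard yes/no decision
  print(sigmoid(2.0))   # ~0.88 -- graded confidence instead of a hard cutoff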

Moving on, we now need to assign some weights to determine importance. Larger weights make a single input's contribution to the output more significant compared to other inputs.

  • W1 = 5, since you value time
  • W2 = 3, since you value staying in shape
  • W3 = 2, since you've got money in the bank

Finally, we'll also assume a threshold value of 5, which would translate to a bias value of –5.

Since we established all the relevant values for our summation, we can now plug them into this formula.

  Y-hat = ∑(wi*xi) + bias = (1*5) + (0*3) + (1*2) – 5

Using the following activation function, we can now calculate the output (i.e., our decision to order pizza):

  f(x) = 1 if ∑(wi*xi) + b ≥ 0; 0 if ∑(wi*xi) + b < 0

In summary:

  Y-hat (our predicted outcome) = Decide to order pizza or not

  Y-hat = (1*5) + (0*3) + (1*2) – 5

  Y-hat = 5 + 0 + 2 – 5

  Y-hat = 2, which is greater than zero.

Since Y-hat is 2, the output from the activation function will be 1, meaning that we will order pizza (I mean, who doesn't love pizza).
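To tie the arithmetic together, here is a minimal sketch of the pizza decision in Python. The inputs, weights, and bias are exactly the values assumed above; the variable names and the inline step activation are illustrative additions, not part of the original article.

  # Inputs from the example: save time (1), lose weight (0), save money (1)
  x = [1, 0, 1]
  # Weights reflecting how much each factor matters to you
  w = [5, 3, 2]
  # A threshold of 5 translates to a bias of -5
  bias = -5

  # Weighted sum plus bias: (1*5) + (0*3) + (1*2) - 5 = 2
  y_hat = sum(wi * xi for wi, xi in zip(w, x)) + bias

  # Step activation: order the pizza if the result is at or above zero
  decision = 1 if y_hat >= 0 else 0

  print(y_hat)     # 2
  print(decision)  # 1 -> order the pizza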

If the output of any individual node is above the specified threshold value, that node is activated, sending data to the next layer of the network. Otherwise, no data is passed along to the next layer of the network. Now, imagine the above process being repeated multiple times for a single decision, as neural networks tend to have multiple "hidden" layers as part of deep learning algorithms. Each hidden layer has its own activation function, potentially passing data from the previous layer into the next one. Once all the outputs from the hidden layers are generated, they are then used as inputs to calculate the final output of the neural network. Again, the above example is merely the most basic example of a neural network; most real-world examples are nonlinear and far more complex.
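As a rough illustration of how the outputs of one layer become the inputs to the next, here is a minimal sketch of a forward pass through two hidden layers. NumPy and the randomly generated weights are assumptions made purely for a runnable example; they are not from the original article.

  import numpy as np

  def sigmoid(z):
      return 1.0 / (1.0 + np.exp(-z))

  # Made-up weights and biases for a tiny 3-4-4-1 network
  rng = np.random.default_rng(0)
  W1, b1 = rng.normal(size=(4, 3)), np.zeros(4)   # inputs -> hidden layer 1
  W2, b2 = rng.normal(size=(4, 4)), np.zeros(4)   # hidden layer 1 -> hidden layer 2
  W3, b3 = rng.normal(size=(1, 4)), np.zeros(1)   # hidden layer 2 -> output

  x = np.array([1.0, 0.0, 1.0])   # the pizza inputs from above

  # Each layer applies its own activation to a weighted sum of the previous layer's output
  h1 = sigmoid(W1 @ x + b1)
  h2 = sigmoid(W2 @ h1 + b2)
  y = sigmoid(W3 @ h2 + b3)

  print(y)   # a value between 0 and 1, interpreted as the final decision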

The main difference between regression and a neural network is the impact of change on a single weight. In regression, you can change a weight without affecting the other inputs in a function. However, this isn't the case with neural networks. Since the output of one layer is passed into the next layer of the network, a single change can have a cascading effect on the other neurons in the network.

See this IBM Developer article for a deeper explanation of the quantitative concepts involved in neural networks.

How is deep learning different from neural networks?

While it was implied within the explanation of neural networks, it's worth noting more explicitly. The "deep" in deep learning refers to the depth of layers in a neural network. A neural network that consists of more than three layers—which would be inclusive of the inputs and the output—can be considered a deep learning algorithm. This is generally represented using the following diagram:

Diagram of Deep Neural Network

Most deep neural networks are feed-forward, meaning they flow in one direction only, from input to output. However, you can also train your model through backpropagation; that is, moving in the opposite direction, from output to input. Backpropagation allows us to calculate and attribute the error associated with each neuron, allowing us to adjust and fit the algorithm appropriately.
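As a rough sketch of what backpropagation does in practice, the toy example below (NumPy, made-up data, and a single hidden layer are all assumptions for illustration) runs a feed-forward pass, measures the output error, and then moves backward through the network to attribute a share of that error to each weight before adjusting it:

  import numpy as np

  def sigmoid(z):
      return 1.0 / (1.0 + np.exp(-z))

  # Toy data: one training example with three inputs and a target of 1 ("order pizza")
  x = np.array([[1.0, 0.0, 1.0]])   # shape (1, 3)
  t = np.array([[1.0]])             # shape (1, 1)

  rng = np.random.default_rng(1)
  W1 = rng.normal(size=(3, 4))      # input -> hidden
  W2 = rng.normal(size=(4, 1))      # hidden -> output
  lr = 0.5                          # learning rate

  for _ in range(100):
      # Feed-forward pass: data flows in one direction, toward the output
      h = sigmoid(x @ W1)
      y = sigmoid(h @ W2)

      # Backpropagation: push the output error back through the network,
      # attributing part of it to each weight via the chain rule
      err_out = (y - t) * y * (1 - y)            # error signal at the output layer
      err_hid = (err_out @ W2.T) * h * (1 - h)   # error signal at the hidden layer

      # Adjust the weights to reduce the error
      W2 -= lr * h.T @ err_out
      W1 -= lr * x.T @ err_hid

  print(y[0, 0])   # approaches 1.0 as the network fits the toy example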

How is deep learning different from machine learning?

As we explain in our Learn Hub article on Deep Learning, deep learning is merely a subset of machine learning. The primary way in which they differ is in how each algorithm learns and how much data each type of algorithm uses. Deep learning automates much of the feature extraction piece of the process, eliminating some of the manual human intervention required. It also enables the use of large data sets, earning itself the title of "scalable machine learning" in this MIT lecture. This capability will be especially interesting as we begin to explore the use of unstructured data more, particularly since 80-90% of an organization's data is estimated to be unstructured.

Classical, or "non-deep", machine learning is more dependent on human intervention to learn. Human experts determine the hierarchy of features to understand the differences between data inputs, usually requiring more structured data to learn. For example, let's say that I were to show you a series of images of different types of fast food: "pizza," "burger," or "taco." The human expert on these images would determine the characteristics which distinguish each picture as the specific fast food type. For example, the bread of each food type might be a distinguishing feature across each picture. Alternatively, you might just use labels, such as "pizza," "burger," or "taco," to streamline the learning process through supervised learning.

"Deep" machine learning tin leverage labeled datasets, also known as supervised learning, to inform its algorithm, merely information technology doesn't necessarily require a labeled dataset. It can ingest unstructured data in its raw course (e.g. text, images), and information technology can automatically determine the set of features which distinguish "pizza", "burger", and "taco" from one another.

For a deep dive into the differences between these approaches, check out "Supervised vs. Unsupervised Learning: What's the Difference?"

By observing patterns in the data, a deep learning model can cluster inputs appropriately. Taking the same example from before, we could group pictures of pizzas, burgers, and tacos into their respective categories based on the similarities or differences identified in the images. With that said, a deep learning model would require more data points to improve its accuracy, whereas a machine learning model relies on less data given its underlying data structure. Deep learning is primarily leveraged for more complex use cases, like virtual assistants or fraud detection.
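For the unsupervised side of that example, here is a minimal sketch of grouping unlabeled pictures into three clusters purely by similarity. Scikit-learn's k-means is used here for brevity in place of a deep model, and the random vectors standing in for image representations are made up for illustration.

  import numpy as np
  from sklearn.cluster import KMeans

  # Stand-ins for unlabeled image representations (e.g. flattened pixels or embeddings)
  rng = np.random.default_rng(2)
  images = np.vstack([
      rng.normal(loc=0.0, size=(10, 64)),    # one group of similar-looking pictures
      rng.normal(loc=5.0, size=(10, 64)),    # another group
      rng.normal(loc=10.0, size=(10, 64)),   # a third group
  ])

  # With no labels at all, k-means groups the inputs into 3 clusters by similarity;
  # we could then inspect each cluster and name it "pizza", "burger", or "taco"
  kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(images)
  print(kmeans.labels_)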

For further info on machine learning, check out the following video:

What is artificial intelligence (AI)?

Finally, artificial intelligence (AI) is the broadest term used to classify machines that mimic human intelligence. It is used to predict, automate, and optimize tasks that humans have historically done, such as speech and facial recognition, decision making, and translation.

There are 3 primary categories of AI:

  • Artificial Narrow Intelligence (ANI)
  • Artificial General Intelligence (AGI)
  • Artificial Super Intelligence (ASI)

ANI is considered "weak" AI, whereas the other two types are classified as "strong" AI. Weak AI is defined by its ability to complete a very specific task, like winning a chess game or identifying a specific individual in a series of photos. As we move into stronger forms of AI, like AGI and ASI, the incorporation of more human behaviors becomes more prominent, such as the ability to interpret tone and emotion. Chatbots and virtual assistants, like Siri, are scratching the surface of this, but they are still examples of ANI.

Strong AI is defined by its ability compared to humans. Artificial General Intelligence (AGI) would perform on par with another human, while Artificial Super Intelligence (ASI)—also known as superintelligence—would surpass a human's intelligence and ability. Neither form of strong AI exists yet, but ongoing research in this field continues. Since this area of AI is still rapidly evolving, the best example that I can offer on what this might look like is the character Dolores on the HBO show Westworld.

Manage your data for AI

While all these areas of AI can help streamline areas of your business and improve your customer experience, achieving AI goals can be challenging because you'll first need to ensure that you have the right systems in place to manage your data for the construction of learning algorithms. Data management is arguably harder than building the actual models that you'll use for your business. You'll need a place to store your data and mechanisms for cleaning it and controlling for bias before you can start building anything. Take a look at some of IBM's product offerings to help you and your business get on the right track to prepare and manage your data at scale.


Source: https://www.ibm.com/cloud/blog/ai-vs-machine-learning-vs-deep-learning-vs-neural-networks