In this section, you'll learn the basic "Hello, World" of ML, where instead of programming explicit rules in a language, such as Java or C++, you'll build a system trained on data to infer the rules that determine a relationship between numbers.
Consider the following problem: You're building a system that performs activity recognition for fitness tracking. You might have access to the speed at which a person is walking and attempt to infer their activity based on that speed using a conditional.
if(speed<4){ status=WALKING; }
You could extend that to running with another condition.
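In Python, the extended conditional might look like the following sketch (the speed threshold of 4 and the activity names are illustrative, not taken from real fitness data):

```python
# Illustrative activity labels and threshold (hypothetical values)
WALKING = "WALKING"
RUNNING = "RUNNING"

def detect_activity(speed):
    # Rule-based approach: the programmer writes the conditions explicitly
    if speed < 4:
        return WALKING
    else:
        return RUNNING
```

The limitation of this approach is that you have to know and hand-code the rules yourself; machine learning instead infers them from data.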
Consider the following sets of numbers. Can you see the relationship between them?
X: -1, 0, 1, 2, 3, 4
Y: -2, 1, 4, 7, 10, 13
As you look at them, you might notice that the value of X is increasing by 1 as you read left to right and the corresponding value of Y is increasing by 3. You probably think that Y equals 3X plus or minus something. Then, you'd probably look at the 0 on X and see that Y is 1, and you'd come up with the relationship Y=3X+1.
That's almost exactly how you would use code to train a model to spot the patterns in the data!
Now, look at the code to do it.
How would you train a neural network to do the equivalent task? Using data! By feeding it with a set of X's and a set of Y's, it should be able to figure out the relationship between them.
Start with your imports. Here, you're importing TensorFlow and calling it tf for ease of use.
Next, import a library called numpy, which makes it easy and fast to represent your data as arrays.
The framework for defining a neural network as a set of sequential layers is called keras, so import that, too.
import tensorflow as tf
import numpy as np
from tensorflow import keras
Define and compile the neural network
Next, create the simplest possible neural network. It has one layer, that layer has one neuron, and the input shape to it is only one value.
model = tf.keras.Sequential([keras.layers.Dense(units=1, input_shape=[1])])
Next, write the code to compile your neural network. When you do so, you need to specify two functions—a loss and an optimizer.
In this example, you know that the relationship between the numbers is Y=3X+1.
When the computer is trying to learn that, it makes a guess, maybe Y=10X+10. The loss function measures the guessed answers against the known correct answers and measures how well or badly it did.
Next, the model uses the optimizer function to make another guess. Based on the loss function's result, it tries to minimize the loss. At this point, maybe it will come up with something like Y=5X+5. While that's still pretty bad, it's closer to the correct result (the loss is lower).
The model repeats that for the number of epochs, which you'll see shortly.
First, here's how to tell it to use mean_squared_error for the loss and stochastic gradient descent (sgd) for the optimizer. You don't need to understand the math for those yet, but you can see that they work!
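Assuming the one-layer model defined above, the compile step might look like this:

```python
import tensorflow as tf

# The single-layer, single-neuron model from earlier
model = tf.keras.Sequential([tf.keras.layers.Dense(units=1, input_shape=[1])])

# mean_squared_error for the loss, stochastic gradient descent for the optimizer
model.compile(optimizer='sgd', loss='mean_squared_error')
```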
Over time, you'll learn the different and appropriate loss and optimizer functions for different scenarios.
Next, feed some data. In this case, you take the six X and six Y variables from earlier. You can see that the relationship between those is that Y=3X+1, so where X is -1, Y is -2.
A Python library called NumPy provides lots of array-type data structures to do this. Specify the values as arrays in NumPy with np.array().
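Using the six values from earlier, the data might be declared like this:

```python
import numpy as np

# The six X and Y values from the table above; Y = 3X + 1
xs = np.array([-1.0, 0.0, 1.0, 2.0, 3.0, 4.0], dtype=float)
ys = np.array([-2.0, 1.0, 4.0, 7.0, 10.0, 13.0], dtype=float)
```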
Now you have all the code you need to define the neural network. The next step is to train it to see if it can infer the patterns between those numbers and use them to create a model.
The process of training the neural network, where it learns the relationship between the X's and Y's, is in the model.fit call. That's where the model goes through the loop described above: make a guess, measure how good or bad the guess is (the loss), then use the optimizer to make another guess. It does that for the number of epochs that you specify. When you run the code, you'll see the loss printed out for each epoch.
model.fit(xs, ys, epochs=500)
For example, you can see that for the first few epochs, the loss value is quite large, but it's getting smaller with each step.
As the training progresses, the loss soon gets very small.
By the time the training is done, the loss is extremely small, showing that your model is doing a great job of inferring the relationship between the numbers.
You probably don't need all 500 epochs and can experiment with different amounts. As you can see from the example, the loss is really small after only 50 epochs, so that might be enough!
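Putting the pieces together, a minimal end-to-end sketch (using the same data and 500 epochs) lets you check what the model learned, for example by predicting Y for X = 10. Because Y = 3X + 1, you should see a value close to, but probably not exactly, 31: the model learned the relationship from only six points, so its weights are a very close approximation rather than the exact formula.

```python
import numpy as np
import tensorflow as tf

xs = np.array([-1.0, 0.0, 1.0, 2.0, 3.0, 4.0], dtype=float)
ys = np.array([-2.0, 1.0, 4.0, 7.0, 10.0, 13.0], dtype=float)

model = tf.keras.Sequential([tf.keras.layers.Dense(units=1, input_shape=[1])])
model.compile(optimizer='sgd', loss='mean_squared_error')
model.fit(xs, ys, epochs=500, verbose=0)

# Should print a value close to 31 (since Y = 3X + 1 and X = 10)
print(model.predict(np.array([[10.0]]), verbose=0))
```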