Testing an online deployment
After creating an online deployment of a model, you can test it directly from your browser. Navigate to the details page of the online deployment you are interested in and select the version of the deployment you wish to test.
If the model you wish to test accepts image input, you can use the file picker below the input textarea. Click the Browse button and find an image you wish to feed to the model.
In this example, the deployment is based on a model of the Image Classification model type. It therefore accepts images and, in the dialog (shown below), we have found a suitable image to use for testing the model.
If your model does not accept image input, you can enter some suitable input directly into the white input textarea.
For models accepting images, choosing an image from your computer using the file picker will automatically fill out the input textarea. By default, the file picker generates an input formatted as JSON: a dictionary with a single key named instances, whose value is an array containing a single object. That object is itself a dictionary with a single key named images_bytes, which in turn holds an object with the key b64. The value of b64 is the image encoded as a base64 string. This follows the structure that most image-based models deployed in Google Cloud ML Engine require. Learn more on Google's documentation page on issuing predict requests to Google Cloud ML Engine.
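If you prefer to build this payload outside the browser, the structure described above can be constructed programmatically. The following is a minimal sketch; the helper name build_payload and the dummy bytes are illustrative, not part of the product:

```python
import base64
import json

def build_payload(image_bytes: bytes) -> str:
    """Build a predict-request body in the format the file picker generates:
    {"instances": [{"images_bytes": {"b64": "<base64 of the image>"}}]}
    """
    encoded = base64.b64encode(image_bytes).decode("utf-8")
    payload = {"instances": [{"images_bytes": {"b64": encoded}}]}
    return json.dumps(payload)

# Example with placeholder bytes standing in for the contents of an image file:
print(build_payload(b"\x42\x4d\x00"))
```

In practice you would read the bytes from your image file (e.g., with open(path, "rb")) and paste the resulting JSON into the input textarea.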
You must make sure to provide an input that is compatible with the model. In the specific example of the Image Classification model type, the input must match the settings you provided when creating the model in the first place. This includes providing an image of the right format (e.g., BMP) and with the right number of color channels. In the example below, an image with the right format for the specific model (BMP) but with an incorrect number of color channels (1 instead of 3) has been chosen.
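A mismatched format or channel count typically only surfaces as an error once the model rejects the request, so a quick local check can save a round trip. Below is a minimal sketch, assuming a BMP file with the common BITMAPINFOHEADER layout, that reads the bit depth from the file header (8 bits per pixel usually means a single channel; 24 bits means 3 channels):

```python
import struct

def bmp_bits_per_pixel(data: bytes) -> int:
    """Return the bits-per-pixel value from a BMP file's header.

    An 8-bit BMP is typically single-channel (grayscale/palette),
    while a 24-bit BMP has 3 color channels.
    """
    if data[:2] != b"BM":
        raise ValueError("not a BMP file")
    # bits-per-pixel is a little-endian uint16 at byte offset 28
    # in the BITMAPINFOHEADER layout
    return struct.unpack_from("<H", data, 28)[0]
```

You could run this on your image before pasting it into the test dialog to confirm the channel count matches what the model expects.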
In the example below, an eligible image has been chosen instead. We can see that the model believes that this image is of the good class with 99.29% probability, which is indeed correct.
In the next example, an image of the class missing_cap has been run through the model and we can see that the model believes this image to be of the missing_cap class with a probability of 99.94%.
This tool provides an easy way to test whether the mechanics of the model work as expected. Validating the model by running examples through it one at a time is not recommended. If your goal is to validate the model, check out our articles on model validation instead. The purpose of this tool is only to test whether the interaction with the model works as anticipated.