“Introduction to Custom Vision” – A Poem

Background: The following is a transcript of a presentation I delivered on Microsoft Custom Vision (which this article is based on). The transcript was auto-generated using Microsoft Video Indexer (https://www.videoindexer.ai), and showcases the (reasonable) accuracy and unintentional comedy value of the platform. To be read in iambic pentameter in a moderately darkened room.

An Ode To Pickle Kobara
By Matt Tank

Hey everyone, my presentation tonight is all about birds.
Almost specifically.
Not not like that.
That’s better Microsoft custom vision featuring birds.
Custom visions are part of the cognitive services suite,
which is Microsoft’s iOS service offering.
Um is part of the Vision Toolkit along with Computer
Vision,
which I’ll quickly explain 1st.

So I competed visions an image classifier.
It’s preconfigured an ready for you straight away,
so all you do is upload an image.
An it’ll tell you what’s in there.
With varying degrees of success,
it will also tell you what the scene is in general,
so in this case. We’ve got a dog holding a
bowl being chased by a panda in the ocean so
there’s some pretty useful applications for that.
Yeah, I guess I’ll pick bowl outside there’s some pretty
useful applications.
You could run, it on your website just to make
sure that if someone’s uploading an image.
They know what it’s supposed to have enough you’ve probably
also seen it before it’s built into Windows 10 photo
editor.
Does things like categorizing your photos based on the subject
of the file?
And so that’s computer vision?
How does custom vision differ?

Well remember the picture of the dogs on the beach?
Yep, that’s it. Sorry computer visions correctly identified most of
the objects in there.
But what if dog and Bowl isn’t enough.
Well, that’s where custom vision comes in.
The advantage of custom vision is that you can classify images,
however you want so it’s not pre built.
It’s not preconfigured already to use you’ve got to actually
make it work,
but that’s where it’s Power Wise.
So.
With computer vision. This image gets picked up as a dog.
With custom vision. It knows its elaborate all so long
as he takes it that that’s what to look for.
So how do you use it head over to custom
vision dot AI an right now using your Office 365
account?
Or, a Microsoft account. You can create a limited trial
project.

And for real world implementations.
You want to connect it to a jewel like so
you select your is your subscription.
And then a service here,
So what you’ve got here is the fray FOT,
which essentially has the same limits as the free trial.
Or for a few bucks,
a month. You can go to the standard tier.
The SO1, which has a much higher threshold of images
and projects and so on.
I also expect that they’ll be a premium tier coming
soon once usage of the service speaks up and that
will be useful for those enterprise level projects.

So once you’ve created you project,
the first step is to upload images and tags on.
So that’s where the bits come in.
So first of all we create tags on the left-hand side here.
Upload your images now it’s important that you actually upload
a lot of images per tag.
Use different angles poses and so on.
Get as many images out there as he can for
each tag and then assign each each in each with
one or more tags.

So once you’ve done that the next step is to
train a classifier so seriously.
All you do is hit the train button.
Don’t need to pick out it repeatedly it takes a
little while to run through,
but once it does. You will have a classification model
that you can upload some images to and see what
it spits out.
So, in this Case No problem.
It’s pickle kobara 99.9%.

So once you’ve uploaded a photo for training.
Uh it saves it to the service so that you
can then go and.
Tag it, I need to confirm what it’s predicted or
correct any mistakes,
which is good for when everything goes pear shaped.
But who’s ever seen a hookah bar do that to
someone.
Right but Let’s see so we’ve seen how performs with
a distinctive image and a distinctive subject.
But what about when it’s not so easy.

So on the left hand side here,
we’ve got our marry Magpie and on the right hand
side.
The Demon Helbert from the slide before.
Or the Australian Magpie. And so that’s where I leave
the relative safety PowerPoint and put Microsoft money where my
mouth is.

Site is fake.
So I’m just going to head over to the custom vision.
Console so that’s it, there,
you can actually train your images directly from the console.
So if I hit the quick test button there.
Why is it boy? So I can paste in URL.
Uh of an image from the Internet and hit the
Triangle Button,
there, so this is a graceful why this.
Of course, it is, and uh in chicken 99.9%,
so but what about the hotter images,
so the tests, I was talking about before.
Well, for that I’m going to head over to my
power apps app that I developed because it’s idiot proof
and I’m not a developer.
And. That’s going to use the prediction Ipi to send
the to send the URL too.
To custom vision? So again I just paste it in.
Alright should show me the image.

Hit the go button and we’ll see how it goes.
All right, so it’s uh it’s picked the Marie Magpie,
there so the correct burdan.
It’s pretty sure so 97%.
That grain doesn’t come through very well.
Uh so you can also use the API to upload
images,
so will do that now so the second image will
be a?
Picture that I’ve saved to my hard disk somewhere.
Workout with that, we know it’s going to come up.
There we go so this is our Australian Magpie.
So I will give the Identifier Button,
I press then we’ll see what comes back there.
99.7% at its peak, the right one,
so no problems at all with the two similar looking
subjects.
So that’s how it works in general.

What are the practical application of applications of it so
head back to PowerPoint?
If you are running a winery you could input pictures
of all year,
you vines and get it to identify.
Great diseases. Say.
You upload your images. Custom vision will identify the ones
that are diseased and you can use the location tag
in in the image itself to say OK?
Where is that in mind?
In my opinion

Alternatively, fisheries could create a app where fishermen can upload
pictures of their catch.
Custom vision will identify identify it and.
It will spit back the identification along with some other
information so like bag limits or bite limits or things
like that.
You can also set a threshold on that as well.
So you can say OK if it didn’t.
If it didn’t really confidently identify.
The fish that came in.
Diges sadly identification back with a low confidence say.
I need to be more sure get a clearer photo
I send it through again.

So by default the service.
Uses Azure Connect to the Internet every time but there’s
also a streamlined machine learning models that you can.
Use on your projects, which can be actually downloaded into
the app.
So it does sacrifice accuracy a little bit.
But what it means is that you fishermen can be
out on a bike with no Internet access and you
can still use the service.

So after using this service water I found well make
some simple mistakes but it also surprises you sometimes.
Sometimes it picks up the differences between 2 subjects that
the human icon so both of those points come down
to how.
The machine learning algorithms interpret images compared to how a
human does.
The number and variety of the training images is decayed
so the more images.
You have per tag per subject.
Uh the more poses the my angles more colors last
stages,
etc, etc. The more accurate any predictions are going to
be.
And finally, it’s really easy to set up so within
half an hour you can have a working model.
With a handful of tags correctly,
identifying the subject in each one and you really only
need basic level,
it skills to do it.

So before I get to questions just one last thing
hot off the presses.
Microsoft actually updated the service in October and they’ve introduced
into preview and you project file so the object detection
project type.
It won’t just identify what your images.
It will actually look inside the image and it will
pick up multiple objects.
Tagging each one and telling you where in the image
it is.
So, in this case, it’s picked up a couple of
fish.
Identify the species of each put a bounding box to
say this is where each one is.

So that’s useful for any applications where.
Where it’s not just enough to say OK?
What’s in this in each now.
If someone uploads an image with multiple?
Things in it, and you’ve got my controller that well.
You can’t just have it say this images of eggs.
Because there’s 2 things in the image.
So that’s going to be useful wanna keep an eye
on.

So that’s it for the presentation back to the questions
any questions.
How many images do you think it would be the
moon but you need to be uploaded contractual recognize them?
I look at 20 for the standard image classification.
Um. For the object detection.
It’s a waste 15 and so I probably double that
about 30 images.

And so the differences with the object detection when you’ve
actually got a when you upload it rather than just
telling it what?
Using the image you’ve got to actually drop box around it.
To say what it is,
and where it is.
Anything else

Just in Annapolis had this for a long
time,
and my Mac for upload just images of the family
photos ever done.
Bring up the different people and I can say that’s
yeah.
That’s right and Google’s got the same thing in there,
they Google Drive. File app as well so we’re seeing
more and more of a commercially and this is a
way that way is consultants can actually use it in
our projects. So I don’t know what the what the
competitors are alike in that space how easy it is
to actually access those.
But this one is one that’s ready to go now.
Again thanks a lot thank you answer.
No, that’s it.

Leave a Reply

Fill in your details below or click an icon to log in:

WordPress.com Logo

You are commenting using your WordPress.com account. Log Out /  Change )

Google photo

You are commenting using your Google account. Log Out /  Change )

Twitter picture

You are commenting using your Twitter account. Log Out /  Change )

Facebook photo

You are commenting using your Facebook account. Log Out /  Change )

Connecting to %s