Discovering and Delivering Artificial Intelligence Products
Hello and welcome to this webinar on
discovering and delivering A.I. products.
My name is Doug Rose.
One of the key things that I want you to get out of this is that with artificial intelligence, you don't build products, you discover them.
So we're going to find out a little bit more about what that means. What does it mean to discover a product?
Most organizations are very comfortable building products: they come up with an idea and then build it out over time. They're considerably less comfortable exploring different possibilities, asking lots of questions, and trying to discover a new product.
So let's start by talking about what it means to discover a product. I'm going to start at the very beginning with a working definition of artificial intelligence.
Artificial intelligence is the ability of a computer to perform tasks that are commonly associated with human intelligence.
There are a couple of common A.I. tools for doing this. You may have heard of machine learning, which takes massive data sets and has the machine learn from them through machine learning algorithms. Then you have artificial neural networks, which connect small artificial neurons together and use the human brain as a rough map for how to work with these massive data sets, so the brain becomes almost a metaphor for how the machine can learn something new from the data.
Then you have something called deep learning, which takes these neural networks and stacks the neurons into several layers. The deeper those layers go, the better your machine learning algorithms are at finding really hard to discover patterns. So if you have a really deep neural network, the machine can see patterns in things that humans can't even comprehend.
So when you see something like Google Translate using deep learning, what it's doing is using a deep artificial neural network with machine learning algorithms to look for patterns in how people speak. It sees these patterns, learns them, and then translates new words by matching the new data against the patterns it has seen in the past. There's a model, and the machine updates that model by itself.
Now, I don't expect you to go out with what I've told you and start building artificial neural networks or working on deep learning projects. But I think it's important to understand what these things are.
Think of it as a progression. You start out with machine learning algorithms, which can look for patterns. Then you can use artificial neural networks with those machine learning algorithms to find really hard to discover patterns. And then you can use deep learning to detect extremely difficult patterns in massive data sets.
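Just to make that progression a little more concrete, here's a minimal sketch in Python using scikit-learn. This is my own toy illustration, not anything from a production system: a plain linear model can't capture the XOR pattern, but a small multi-layer network, the simplest cousin of a deep network, can.

```python
# A minimal sketch: a linear model vs. a small multi-layer neural network
# on the XOR pattern, which is not linearly separable.
# Assumes scikit-learn is installed; illustration only.
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

X = [[0, 0], [0, 1], [1, 0], [1, 1]]
y = [0, 1, 1, 0]  # XOR: true only when exactly one input is 1

linear = LogisticRegression().fit(X, y)
print("linear model accuracy:", linear.score(X, y))  # struggles, around 0.5

# A network with hidden layers can represent the nonlinear pattern.
net = MLPClassifier(solver="lbfgs", hidden_layer_sizes=(8, 8),
                    max_iter=5000, random_state=0)
net.fit(X, y)
print("multi-layer network accuracy:", net.score(X, y))  # typically 1.0
```

The point isn't the library; it's that adding layers of artificial neurons lets the model represent patterns that a simpler algorithm misses.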
So when you see something like Google's self-driving car, it's using a form of deep learning. The car collects massive data sets as it drives down the road and tries to see patterns: when it sees someone crossing the street, it knows to stop, because it has seen that pattern a million times across enormous amounts of data. When you see those cars driving around, they're collecting these massive amounts of data so the deep learning algorithms can find these very difficult to discover patterns.
These tools are getting cheaper and easier to use. But when you think about artificial intelligence in your organization, I don't want you to start with the tools. I don't want you to go out and set up an artificial neural network or start training everybody in TensorFlow or something like that. Instead, I think the best place to start, for organizations that want to discover products, is with their organizational mindset. That's really the main obstacle keeping a lot of organizations from getting any value out of these new A.I. tools.
For the last 50 years, most organizations have been focused on operational efficiency. They're creating and meeting management objectives, they're going lean, and they're trying to make sure the operational part of the organization is as efficient as possible. You see this a lot with Peter Drucker, whom everybody's always quoting: the enterprise must have clear and unifying objectives. You set those objectives and then remove the operational inefficiencies that stand in the way of meeting them.
But to really get value from A.I. tools, you have to move away from that. You have to start thinking more like a scientist. You have to think about discovery. When you look at how most organizations operate, though, that's not the way they approach new products.
A typical organization will approach new products using something like the typical project lifecycle. They'll start out by planning something new: they come up with a project, they write a requirements document, then they analyze how the project or product is going to fit into the organization and try to map objectives to it.
If you were going to come up with a new tennis shoe, you'd go out and plan it: we're going to build a new tennis shoe. Then you'd analyze it by looking at the market and mapping some objectives, say, we want the shoe released two years from now, or in the next quarter, or whatever. Then you design the product: you describe the features and design it out. That's where, with shoes or other manufactured goods, people create schematics and things like that.
And then you'll code the product. If you're working with software, you'll have developers coding it out. If you're doing something like a shoe, you'll have people manufacturing the product, but it's essentially the same step: this is where you're working to build out the product. Then you'll test it. In software you have quality assurance testers who go through and test the product. In manufacturing you'll have someone put on a pair of shoes, go for a walk, send them to some customers, and see what they think. And then, assuming the tests all clear, you'll deploy the product and deliver it. Software gets deployed to servers. Shoes get delivered to customers.
So you have a typical project lifecycle: plan, analyze, design, code, test, deploy. But A.I. products are different. Some common A.I. products are next-generation business agents, the kind that pop up on a website and answer the questions you type in.
There are also object and pattern detection A.I. products. I once worked for a paper company that was trying to use object and pattern detection to mitigate workplace injuries. They had cameras set up on the shop floor, and the agent was designed to look for spills or to see if someone left a cup of coffee on top of a piece of equipment. The A.I. would then send out an email or a notice to try to mitigate that hazard.
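Just to sketch the shape of that kind of detect-and-notify pipeline, here's a toy Python illustration of my own. It is not the paper company's actual system: a crude frame-difference check stands in for the real detection model, and a print statement stands in for the email or notice.

```python
# Toy sketch of a detect-then-notify loop. The "detector" is just a
# frame-difference threshold standing in for a real object/pattern
# detection model; the notification is a print instead of a real email.
import numpy as np

def hazard_detected(previous_frame: np.ndarray, current_frame: np.ndarray,
                    threshold: float = 25.0) -> bool:
    """Flag a frame if it differs a lot from the previous one (a crude
    stand-in for spotting a spill or a misplaced object)."""
    difference = np.abs(current_frame.astype(float) - previous_frame.astype(float))
    return difference.mean() > threshold

def notify(message: str) -> None:
    # In a real system this might send an email or a shop-floor alert.
    print("ALERT:", message)

# Simulated grayscale camera frames: background noise plus a sudden bright patch.
rng = np.random.default_rng(0)
baseline = rng.integers(0, 50, size=(120, 160)).astype(np.uint8)
spill = baseline.copy()
spill[40:80, 60:120] = 255  # something new appears on the floor

if hazard_detected(baseline, spill):
    notify("Possible spill or object detected on the shop floor camera.")
```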
And the most common are the A.I. assistants, like Alexa or Siri, which use natural language processing to do analysis in real time, learn from people's requests, and give them back the information they want. So these are pretty typical A.I. products.
So you can't really use a standard development lifecycle with A.I. products. For one, it's difficult to have a plan, because you're going to be learning so much along the way. When the paper company was pointing those cameras at the shop floor, they learned a lot about the types of injuries they might be able to mitigate, and a lot about how people might get injured.
It's difficult to plan all that out because a lot of it is going to be discovery, and because it's difficult to plan, it's difficult to have a scope. It's difficult to know exactly when to stop. What's the scope of your entire product? When does the company say, OK, we've mitigated enough injuries? Are they going for 100 percent or 90 percent? You're just trying to tweak and optimize the product over time.
And since you don't have a plan or a scope, it's also difficult to come up with requirements, again because a lot of these products learn as you go. If you're getting better at natural language processing, it's very difficult to have strict requirements of the form: do this to achieve this level of functionality. You don't know what the requirements are. You don't know what tweaks to the machine learning algorithm are going to make the result more effective, because you're running experiments and trying to optimize. So requirements are very difficult. Requirements depend on knowing that if you do a certain thing, you'll get some sort of outcome, and you don't really have that with A.I. products. It's also very difficult to gauge the quality of an A.I. product, because things start out less optimized and you optimize them over time.
If you look at Google, a lot of times they'll start out with machine learning tools that aren't very effective yet, like when they started out with the Atari 2600 game that the machine played against itself. It was great and it was neat, but they were just starting out there, and then they optimized it over time to the point where the machine could play Go or more complex video games. So it's very difficult to pin down quality when you're always improving: you don't stop at the starting point, you start somewhere and then optimize over time.
And because you don't have the plan, the scope, or the requirements, and you're not sure what the quality is going to be or where you're going to stop improving it, it's very difficult to budget. You don't know when your product is going to be optimized to the point where it will be valuable to people. So you have to run these experiments, improve the product over time, and then, if you feel it adds real value, release it.
When you look at companies like Microsoft or Google or Facebook, what they're doing is creating deep learning products that might not have that much value yet. Microsoft might be creating a deep learning product that can identify humor in comics, but that doesn't really have much commercial value yet. They know that you start out somewhere and then optimize and improve over time. So it's very difficult to think of these products the same way you think about a typical project.
If you list out typical project objectives and compare them to typical A.I. products, you'll see the difference. A typical project might be something like developing a customer self-help portal, where a typical A.I. product might be to better understand a customer's needs.
Remember, we were looking at those agents. A typical project objective might be to create software based on customer feedback, where a typical A.I. product might be a cell phone company trying to create a model to predict customer churn, because keeping an existing customer is less expensive than acquiring a new one. Or a typical project might be something like creating an online course, where a typical A.I. product might be a machine learning algorithm that helps identify fake news. You hear a lot about that in the news lately, with climate change and things like that. Coming up with an A.I. product where you're trying to improve a machine learning agent's ability to identify fake news is something you would optimize and improve over time, which is much different from a typical project.
Another typical project might be something like converting legacy code and updating server software, where a typical A.I. product would be something more along the lines of stopping security threats, where you have to anticipate something completely new or look for and identify patterns that might be hostile. That's completely different from something where you can scope out the objectives and try to meet them. So there's a big difference between A.I. products and typical projects where you can use project objectives.
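To give you a feel for one of those A.I. product examples, here's a minimal churn-model sketch in Python with scikit-learn. The feature names and data are completely made up for illustration; a real churn model would start from the company's own customer data and a lot more careful experimentation.

```python
# Minimal churn-prediction sketch on synthetic data. The features and data
# are invented for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n_customers = 1000

# Invented features: months as a customer, support calls, monthly bill.
tenure = rng.integers(1, 72, n_customers)
support_calls = rng.poisson(2, n_customers)
monthly_bill = rng.normal(60, 20, n_customers)

# Synthetic "ground truth": short-tenure customers who call support a lot
# are more likely to churn.
churn_probability = 1 / (1 + np.exp(0.05 * tenure - 0.6 * support_calls))
churned = rng.random(n_customers) < churn_probability

X = np.column_stack([tenure, support_calls, monthly_bill])
X_train, X_test, y_train, y_test = train_test_split(
    X, churned, test_size=0.25, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("held-out accuracy:", round(model.score(X_test, y_test), 3))
```

Notice that even in a toy like this, the interesting work is in the experimenting: which features matter, how good is good enough, when do you stop tuning. Those are exactly the questions a plan-driven lifecycle struggles with.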
So what I like to do with customers when I work on A.I. products is create an entirely different life cycle, one based more on discovery, which I call the learning lifecycle, or discovery lifecycle. When you're working on an A.I. product, you first want to identify the roles: just start out by identifying the roles.
Think about the different people who are going to interact with your product. Then ask a bunch of interesting questions about your product: how will we approach this? What are the different ways we could approach it? What are the different ways we can add value? Then research: look at the data, get as much data as you can, crunch it, and get something interesting out of it. If you have data science teams that can work with big data, see if you can create an agent that does something interesting with the data. Then look at the results: share them with other people in the company, discuss reports, and see if this A.I. agent, this product, is doing something interesting. Then gain insights from it, draw conclusions, and, in the end, create knowledge.
If you look at how a lot of A.I. software companies are working with A.I. products, you can recognize this lifecycle. Like I said, with Google or with Microsoft, they'll start out with these small products that don't have much value. They'll ask some interesting questions: can we use a deep learning network to have a video game play against itself? Then they do some research: have the machine play against itself a million times, a hundred thousand times, whatever, and see if it's learning something new. They see if they're getting any interesting results and if the model is improving, then see what insights they've drawn from it and how they can improve the product, maybe turning it into something that can do something more interesting, like playing a more complex game such as Go or some of the more complex video games. And then they see what they've learned.
And so this is a completely different life
cycle than what you have
with a typical product lifecycle.
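If you want a tiny, hands-on feel for that "have the machine play against itself and see if it improves" step, here's a toy sketch of my own in Python. It uses tabular Q-learning with self-play on a little Nim game; it only illustrates the idea of improving through repeated episodes, not the deep reinforcement learning behind the Atari or Go work.

```python
# Toy sketch of "learn by playing against yourself": tabular Q-learning
# with a negamax-style update on a tiny Nim game (take 1-3 stones; whoever
# takes the last stone wins). Illustration only.
import random

MAX_PILE = 10
ACTIONS = (1, 2, 3)
# Q[pile][take] = estimated value for the player about to move.
Q = {pile: {a: 0.0 for a in ACTIONS if a <= pile} for pile in range(1, MAX_PILE + 1)}

def choose(pile, epsilon):
    """Epsilon-greedy move for whichever player is to act."""
    if random.random() < epsilon:
        return random.choice(list(Q[pile]))
    return max(Q[pile], key=Q[pile].get)

def train(episodes=50_000, alpha=0.1, epsilon=0.2):
    for _ in range(episodes):
        pile = random.randint(1, MAX_PILE)    # random start for coverage
        while pile > 0:
            action = choose(pile, epsilon)
            remaining = pile - action
            if remaining == 0:
                target = 1.0                  # this move wins the game
            else:
                # The opponent moves next; their best value is our loss.
                target = -max(Q[remaining].values())
            Q[pile][action] += alpha * (target - Q[pile][action])
            pile = remaining                  # the other "self" now moves

train()

# After enough self-play the greedy policy should be "leave a multiple of 4"
# (piles that are already multiples of 4 are losing no matter what you take).
for pile in range(1, MAX_PILE + 1):
    best = max(Q[pile], key=Q[pile].get)
    print(f"pile={pile:2d}  learned move: take {best}")
```

Run it and you'll see the learned moves settle on the known strategy of leaving your opponent a multiple of four stones, which is exactly the "is the model improving, what did we learn" question the research phase keeps asking.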
A lot of times what we've seen organizations do is take this life cycle and run it in small knowledge-creating sprints, almost like software sprints, where they run through every phase of the life cycle and then, every two weeks, see if they can produce something interesting. This helps the team learn something new and then quickly pivot if they find something.
I was working with a company once that was trying to create a machine learning algorithm to look through massive data sets and come up with credit card offers. As they were playing with the model and looking at the results, they noticed that the machine was actually pretty good at predicting whether or not someone was having trouble paying their bills. So they were able to run a few sprints and see if they could come up with a new product based on that feedback and on the insights and knowledge they got from one of those shorter sprints.
So you want to run these as little, short cycles of product delivery and be able to pivot if you learn something new. If you end up tied to a really long life cycle, it becomes much more difficult for your organization to learn something new, because they're locked into what they're doing and can't quickly pivot based on new knowledge.
OK, so now you've seen a little bit about what an A.I. product is, and you've seen how you can change your life cycle from something focused on objectives to something more focused on knowledge and learning. Now it's worth thinking about how you can change your organization to actually be more exploratory: less focused on objectives and more focused on learning something new.
An A.I. researcher named Ken Stanley came out with a really interesting book called Why Greatness Cannot Be Planned: The Myth of the Objective. He talked about how humans are actually much better at discovering something new when they're not focused on objectives: if teams are able to explore, to do something close to the scientific method, they can be more creative, they can use their imagination, and they can take a more empirical approach by running small experiments.
He talked about how this focus on objectives is actually an impediment to greatness: if you want to discover something new, you shouldn't focus on objectives, you should tap into human creativity. One example of this is that humans are actually really good at being creative when they don't have a lot of information, or when they have massive amounts of it. Humans are very good at making sense out of nonsense, out of huge amounts of data.
There was an interesting article I read in The New Yorker which said that when people were asked nonsensical questions, they were actually very analytical about them. Researchers took a group and asked them what was more likely to exist, a yeti or a dragon. And people could actually go through the analysis and say, well, I think a yeti is more likely to exist, because it might be smaller and more nimble and live in places with a lot of snow, whereas dragons are larger and we would have seen them flying around, and especially if they breathe fire, they're more attention getting.
They also asked what's more likely to exist, a unicorn or a mermaid? People would say, well, probably a mermaid, because it lives in the ocean and much more of the ocean is unexplored. So even when people are asked something nonsensical or fantastic, they're actually able to do some really interesting analysis. And that's very similar to how you want your teams to think when you're working on an A.I. product.
A lot of the data you'll be getting when you're working on an A.I. product will require some creativity. It will require you to make sense out of nonsense. If you're focused on being completely analytical rather than creative, and if you're focused on objectives, you can actually have a lot of trouble making new discoveries.
One of the things he talked about is that you want your organization to embrace serendipity. You want them to be able to ask interesting questions and to pursue interestingness, to pursue novelty. There are examples of organizations that have discovered something new through serendipity. The microwave was discovered because someone working on radar equipment noticed that the chocolate bar in their pocket was melting. It was a serendipitous discovery, and they were creative about it: they thought, OK, maybe we can make an oven out of this. Plastic was discovered serendipitously, as a sort of byproduct of petroleum. Teflon was discovered serendipitously. A lot of these products were not objective driven; instead, a team was working together, they were creative, they were able to ask interesting questions, and so they were able to discover something new.
More recently, if you're a fan of Silicon Valley, there was an episode where one of the software developers was trying to develop an A.I. product that could identify whether or not something was a hot dog, and he called it the Not Hotdog A.I. product. He created this product, and it was focused on discovery and crunching data, and it ended up being really good, but it wasn't really commercially viable; not that many people want to find a hot dog. So at the end of the episode he ended up selling it to Instagram as a way to filter out whether someone was uploading the wrong kind of pictures. This is a pretty good example of a product that started out going in one direction and then, through creativity, looking for interestingness, and pursuing novelty, was able to pivot and do something else.
And remember, you want to run this life cycle in short sprints so that you can pivot quickly and look for something interesting. If your project is completely focused on objectives, you're going to miss a lot of opportunities to discover something new. Again, when you look at a lot of the companies that are focused on A.I. products, this is exactly what they're doing.
They're working on products that might not have much commercial value yet, but they're learning something, they're learning how to work with the technology, the machine is updating its model, and they're improving their algorithms.
Professor Stanley described these discoveries as being like stepping stones: each time you learn something new, you're taking a step closer to your product. So with Not Hotdog, or with Google creating an Atari 2600 algorithm, each one of these is a stepping stone to creating something new. Now, these companies might not know what the end result is going to be, because you don't really know what all the stepping stones are until you're at the end of the path. But each time they take a step, they're learning something new, and you only see the full pathway looking back from the end.
And this is exactly what's happened in a lot of the products I've worked with. With the shop floor cameras, they learned something new and were able to optimize their algorithm. With the credit card processing firm, they were able to spin off a new product they completely didn't anticipate, because each step they took got them a little bit further. They weren't focused on objectives; they were focused on learning.
And that let them develop some really interesting A.I. products. It might seem strange to use words like creativity and serendipity when you're talking about product development. But if you think about it, a lot of us have very serendipitous things happen to us and don't even really think about it.
If you think about your career, maybe when you were in high school or college you had very well set career objectives. But then something serendipitous happened: you picked up a job you weren't expecting, or you got a promotion you weren't expecting, and your career went in a completely different direction. The fact that you were able to take advantage of that as a person probably really made, really changed, your career. But in organizations, teams are not very well structured to take advantage of the same thing. They would say, well, this might be a serendipitous thing that happened, but I've got these set objectives, so I really can't take advantage of it.
So you really have to think about this. When you're trying to develop an A.I. product, you want to change your organization so it can recognize these serendipitous stepping stones, learn from them, and build out a product, much like you might do as a person. But that's much more difficult to do as a team working in an organization.
Much like in data science, I've found that really small teams working on A.I. products, structured in a way that's consistent with the scientific method, get much better results. You have a three-person team focused on discovery: a knowledge explorer, a data analyst, and a servant leader working together in a tight team to find new products.
A lot of times, if you have a good data science team, it will already be structured this way. Then you can have your data science team also work on your machine learning products as a natural progression, because data science teams are using the scientific method to explore massive data sets. They might end up using machine learning algorithms or artificial neural networks to crunch that massive data and then churn out data products. So it makes sense that they would have a very similar team structure.
One of the most important roles on this team is the knowledge explorer, and this person should think about things differently than the rest of the team. There's a book by Daniel Pink called A Whole New Mind, where he goes over some of the newer skills that are much more valuable in organizations as they start to invent, do more interesting things, and develop products, including A.I. products. He puts much more emphasis, which I think is correct, on story over reporting: you want people in your organization who can fashion a compelling narrative instead of focusing entirely on reporting what's happening.
You also want people who value symphony over detail. Organizations tend to favor people who are very detail oriented, but for A.I. products you want people who can look at the big picture, cross boundaries, and see how the pieces contribute to the overall whole, someone who can see the really big picture instead of focusing on the details. There are a lot of detail-oriented people in your organization, so you might need to train people up in this new skill set or find someone who already has it. And these people on your team should have empathy over certainty.
Instead of focusing on objectives, they should look at what makes their fellow humans tick, and understand how people forge relationships and how people might care for others.
Now, the name for this role that I like the most, and that I've seen in a lot of organizations, is the knowledge explorer. I think it encapsulates how this person should think about themselves: they're trying to gain organizational knowledge, and they're trying to do it by exploring the data with the team. This person won't be in charge of crunching the numbers; that's the data analyst.
Instead, the knowledge explorer is the person asking interesting questions, making sure the team isn't focused on objectives but is instead looking at those questions and seeing if it can learn something new, and helping the team pivot if it finds something serendipitous, if it makes a discovery it wasn't anticipating, which, when you're working with massive data sets and machine learning, is not uncommon. Again, you're going to find a lot of new things when you start crunching these numbers and using machine learning algorithms to look through the data you already have.
A really good example: a few years ago, there was a professor at Cornell, Professor Kleinberg, who came up with a really interesting challenge. He wanted to see if he could have his students discover, from Facebook data, whether or not people were in a romantic relationship. Now, I know you can tag that on Facebook now, but at the time they didn't have you tag whether or not you were in a relationship. So he was just using the data that was out there at the time to try to figure out whether he could make that connection himself.
He used a very non-objective-driven approach and tried to be creative. He acted as the knowledge explorer, the team acted as data analysts, and they tried to figure out whether they could make meaning out of this massive data set. When they got together, they asked questions and talked it out. Remember: empathy over certainty. So they talked about what it was that made people tick.
How do you know when people might be in a romantic relationship? One of the things that came up is that when people enter a romantic relationship, a lot of times they end up becoming friends with their new partner's friends. They meet a whole bunch of new people who have been friends with whoever their new romantic partner has been friends with. So the team thought about how they could represent that in the data.
They came up with this chart, this little visualization here, which shows a dark concentration of new friend requests in the Facebook data. You can see that a lot of people are meeting new people, and that was how they figured out, fairly accurately, whether or not people were in a new romantic relationship with one another.
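Just to illustrate the underlying intuition, here's a toy Python sketch of my own on an invented friend graph, not the metric from the actual study: you can count how many friends a person shares with each of their friends, and a new partner who pulls you into their whole circle starts to stand out.

```python
# Toy sketch: on a small invented friendship graph, count mutual friends
# between one person and each of their friends. A new romantic partner who
# introduces you to their whole circle tends to stand out on measures like
# this. Illustration only, not the study's actual measure.
friends = {
    "alex":  {"bo", "casey", "dana", "eli", "fran", "gio", "hana"},
    "bo":    {"alex", "casey"},
    "casey": {"alex", "bo"},
    "dana":  {"alex", "eli", "fran", "gio", "hana"},  # candidate partner
    "eli":   {"alex", "dana", "fran"},
    "fran":  {"alex", "dana", "eli", "gio"},
    "gio":   {"alex", "dana", "fran"},
    "hana":  {"alex", "dana"},
}

person = "alex"
for friend in sorted(friends[person]):
    mutual = (friends[person] & friends[friend]) - {person, friend}
    print(f"{friend:6s} shares {len(mutual)} mutual friends with {person}: {sorted(mutual)}")
```

The measure in the real study was more sophisticated, but the spirit is the same: the structure of who knows whom carries the signal.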
That's the way you want to think about this data. When you're working on an A.I. product, you want to have empathy and be able to pivot the product based on the knowledge you create when you find something new. And again, they couldn't write an objective; they couldn't say, here are the steps we're going to take to find out if someone's in a romantic relationship. They had to ask interesting questions and explore the data to see if they could find something interesting.
So one of the key things I want to emphasize is that when you're building out these products, a lot of the technology you'll use is going to be either free or not very expensive. You can use TensorFlow, you can download Python libraries. So a lot of the challenge around A.I. isn't really technical as much as it is cultural. You have to change your organization, and it's also about having the people on your team embrace the right mindset.
Some of the key things you should keep in mind: does your organization have an agile mindset? Are they able to think about what they could deliver quickly? That's almost the first step: can they think about things and deliver value quickly? Does the organization already make data-driven decisions? Do you have data science teams in place? Hopefully they're already working in these small teams, taking a creative approach to looking at their data.
Does the organization tolerate and even value change? If you're in a very conservative organization that focuses on structure and certainty, it's going to be very difficult to deliver an A.I. product, because much of the time you're going to be learning something new, and you'll have to be able to pivot, ask interesting questions, and pursue interestingness, as Professor Stanley says.
And does it have the right reward environment for people to experiment? If you start out saying you want to do something one way, and then you learn something new and there's a better way to do it, are you going to be penalized or rewarded for changing course? You want people to be rewarded for discovering something new, for discovering something interesting.
So you want to take small steps with these products, which might not have much value at the beginning, and then, through serendipitous discoveries, use those stepping stones to learn something new and deliver something valuable.
Now, that might sound completely different from how your organization operates, and it might be. But if you look at how these A.I. products are being developed at some of the organizations using them the most, like Google and Facebook and Microsoft, this is exactly what they're doing. They're developing products that don't have much value yet, refining them over time until they do have value, and pivoting based on what they learn. So this is the way you want to deliver these products.
So here are five key takeaways I want you to have from this. Artificial intelligence tools are getting cheaper and more widely available, so you shouldn't think of delivering an A.I. product as a technical challenge; think of it more as an organizational challenge. The tools are not going to help you unless your organization has the right mindset. Do you have people in your organization who are comfortable discovering? Can you create small teams that can pivot and learn something new to deliver a great product?
Have your organization focus on exploration and not on objectives, especially if you're just starting out. Look at Ken Stanley's book and see how objectives can actually get in the way of discovering something new; he learned a lot of this from his own research.
Embrace, and don't suppress, serendipitous discovery. A lot of times in a large organization there will be a lot of people with different backgrounds doing different things, and you might discover something serendipitously. You might figure out the way people are getting injured on a manufacturing floor. You might figure out a way to predict that a customer will have trouble paying their credit card bill just by looking at the data. You want those people to make these serendipitous discoveries and then roll them into your product.
And finally, you want to work with small teams, much like in data science. You want small teams exploring your data instead of just one person who's an A.I. specialist or a data scientist. Small teams increase the likelihood that you'll have a serendipitous discovery, and you don't have to focus on hiring one person to take you where you need to be.
So all five of these takeaways will hopefully help your organization get on track to start delivering these A.I. products. I hope you enjoyed this, and good luck. Thank you for watching.