June 4, 2019

The Chris Butler Hypothesis: Adversarial Product Management Gets to the Core of What Really Matters Using Contrarian Thinking

Chris Butler is the Chief Product Architect at IPsoft, with years of experience working on AI-related products at Philosophie, Complete Seating, and Horizon Ventures. We talk about adversarial product management, how randomization can help improve your decision making, and the challenges large organizations face when they try to disrupt themselves.

Subscribe for the full episode on Apple, Google Play, Spotify, Stitcher, and more. Love what you hear? Leave us a review; it means a lot.

Questions We Explore in This Episode

How did Chris’ experience helping his graphic designer dad adapt to technology get him started in product? How did his training in engineering influence his thinking? What was it like to be a program manager at Microsoft? How did Microsoft invest in product leadership?

How did Chris use his experiences to transition into being a product-focused founder? What were some of the reasons that his business failed? Why did Chris move into business development for a while at Waze? Why did he come back to product management?

What are the challenges of working on products in AI? What misconceptions do big organizations have about how to disrupt themselves? Why is it so hard to change business models? What is dominant logic theory and what can we learn from it?

Why would Chris ask people to sketch bad ideas? What are “crazy eights?” What are the benefits of fast brainstorming? Why does Chris encourage people to draw more than they write? How does Chris introduce randomization into the brainstorming process? How does he help junior product people make decisions?

What is “adversarial” product management? What is discursive design or provocative prototyping? What is a “red team” and what is red teaming? Why is your current mindset your biggest adversary? Why do you need to go after what is surprising in user interviews? How do you integrate more randomness into your work?

Quotes From This Episode

"Adversarial product management is really about how do we use contrarian thinking to get to the core of what really matters. - Chris Butler"
"When we talk about research, when we talk about, doing great work, it's about going after the things that you don't know a lot about. - Chris Butler"
"Evidence is really, really important only when it's surprising because it's not surprising, it doesn't matter that much. - Chris Butler"

Transcript

Holly Hester-Reilly: Hi and welcome to the Product Science Podcast, where we're helping startups, founders and product leaders build high growth products, teams and companies through real conversations with people who have tried it and aren't afraid to share lessons learned from their failures along the way. I'm your host, Holly Hester-Reilly, founder and CEO of H2R Product Science.

Holly:
This week I had a conversation with Chris Butler. Chris Butler is currently the chief product architect at IPsoft. Prior to that he was working on AI-related projects at many companies including Philosophie, Complete Seating and Horizon Ventures, and he's been involved in product and business development for over 18 years at companies like Microsoft, KAYAK and Waze. Chris created techniques like empathy mapping for the machine and confusion mapping to create cross-team alignment while building AI products. He and I had a fast-paced conversation. Listen on, and I hope you enjoy.

Holly:
This week's guest for the Product Science Podcast is Chris Butler. Chris, can you tell us a little bit about what you're doing these days and how you got to this role?
Chris Butler: Absolutely. So I'm Chris Butler. I work as the chief product architect at IPsoft, which is a company that's really all about automating all of the less interesting parts of work, usually for IT organizations, customer support organizations, things like that. It's been around for about 20 years now. I just started about two weeks ago, so it's a very, very new role. But if you want to dive into the history of my work, it all really started with my dad, who was a graphic designer, but an old school graphic designer.

Chris:
And so that meant that he was using things like spray adhesives, tissue paper, cyanotype, all this type of thing. And I was really the first person to help him get into the digital age. So we had our Mac II, which was the fastest graphics machine at the time, I think... And so a lot of my early work was really around apprenticing in graphic design and design itself, but more from a technical standpoint, and I actually ended up helping some of his clients do web work somewhere around 25 or so years ago.

Holly:
Wow!

Chris:
The first HTML that I actually wrote was a very, very long time ago. We didn't have CSS at that point.

Holly:
No, no, definitely not. Did he do things with spray adhesive and different materials for advertising purposes or for-

Chris:
Yeah.

Holly:
... Okay. Interesting.

Chris:
That's right. Most of it was for print advertising. My mom was actually a producer for TV commercials for a little while, and so I came from a more commercial art setting. And actually when I ended up applying for schools, I think his dream was for me to become an industrial designer in automotive. I disappointed him by going into engineering, but it was because I brought my Zip disk full of commercial art down to, say, Art Center, and Art Center then asked, "Where are all your sketches? Where are all your sculptures?" And I'm like, "Hey, I'm a commercial artist. I'm not a regular artist." So that wasn't the right place for me. So I ended up doing computer systems engineering out of Boston University. And a lot of the roles that I looked at right out of school, or getting out of school, were mostly engineering positions, so things like Motorola or General Dynamics, but really hardcore engineering roles, mostly in the security realm.

Chris:
And that was because I did some stuff in high school around computers and phones, which we don't have to go into too much depth on within this interview. But one of the things that really was interesting about the interview I had with Microsoft, where they put me on a program manager track up front, was the question, "How would you redesign a washer dryer combination for someone that was sight impaired?" And so that to me was a really, really interesting question. It was actually much harder to figure out than any of the other technical questions that I had been asked by other organizations. And so I started there actually in the Bay Area, in a satellite office of about 3,000 people from Microsoft, which is called the Silicon Valley Campus or SVC, and started working on MSN Calendar, which was actually a recent acquisition called Jump.

Chris:
A lot of the people from the Microsoft Bay Area group that I know of came from that acquisition by Microsoft. But we were all under the Hotmail umbrella. So I spent about seven and a half years at Microsoft as a program manager, working my way up to actually manage teams of program managers, but worked on a lot of different things including Calendar, Hotmail, desktop search, toolbars. And then the last project was Windows Live Gallery, which was a proto app store for Microsoft, before even the iPhone came out with its app store. It was really thinking about, at the time we called them gadgets or widgets, but how do you allow people to distribute really small pieces of functionality? And there, yeah, it was a really interesting role. I mean definitely managing other program managers ... Within Microsoft, program management tended to have aspects of both product and project management.

Chris:
And so, how you do that was something that was really key to Microsoft. I think that was one of the benefits of being at Microsoft for about seven years: they really invested a lot into how you become a good product leader. So there were boot camps. I wrote a lot of articles internally about how to do the best specs with Microsoft Word. I think one of my claims to fame for the people that had worked with me was that one time I had to write a specification that was about 500 or so pages long.

Holly:
Oh my God.

Chris:
Yeah. And that was all about how we get MSN Calendar and Hotmail to start to do meeting requests and integrate with Outlook, Exchange and, at the time, Lotus Notes.

Holly:
Interesting. Why was it 500 pages? For people from today's product world, that sounds insane. How did that happen?

Chris:
Well, the reason why was that we were really trying to convert from three or four different RFCs, which are basically standards on how the internet works. And so, iCalendar was this way over-extended format that allowed you to essentially put date and time information for a lot of different applications. And so, at one end of the scale, say, NASA needed to collect a lot of information about satellite data, which goes down to microseconds. But then you also need to talk about how iCalendar would potentially be used to collect histories of hundreds of years. And so there's all this crazy stuff including meeting requests and how it integrates with SMTP and email, and it just was an insane standard. I mean, the standard itself was probably thousands of pages long, but plugging all of this data into how it works within Hotmail and Calendar was quite complex. And especially being backwards compatible.

Chris:
So I ended up doing like a day's worth of basically a spec review of different segments of this spec. What was most impactful about that experience, though, was that ... I used specs as a way to really collect my thoughts, so I'd go over them over and over again. I would keep adding information as I learned things from, at the time, mostly engineers, mostly the designers; sometimes we'd go out and do customer interviews, but we really did that very rarely. And so what I started to realize, though, is that even if I wrote this 500-page spec and went over it over and over again, there were always going to be times that I was wrong in some way. I guess that's what set me down the path of really embracing more agile mindsets around this: you're never going to know everything about what you're trying to build. So that was really impactful for me, especially [inaudible] with that 500-page spec over and over again.

Holly:
Yeah. Before we move on to the many things that have come after that, I wonder if you could share a little bit more, you mentioned that being a program manager at Microsoft at the time was a bit of a mix between product and project. Can you tell us a little more about what you mean by that? What sort of things were you doing?

Chris:
Yeah, absolutely. Program managers, they would really help push the engineering team to do the right stuff, like to work on the right thing. And so what that meant was that we would have to figure out what we wanted to build and generally working with people that were like product marketers essentially, that would have a lot of user panels, that type of information. Definitely not the type of user research that I would consider to be best in class today, but ended up being, "Here's market trends," taking all of that and then figuring out what features to build.

Chris:
And because Microsoft is a very large place, you could end up where ... I had heard that in Outlook or in Word, there was like one person that just did spell-checking. And they would write all of the specs about spell-checking. They would do all the work around spell-checking. I mean, I'm not actually sure if that person actually existed or not, but that was always the story that we heard internally. [inaudible 00:09:10].

Holly:
It was actually just a magical idea and there were really five people.

Chris:
There was actually a whole team with like a group product manager or program manager that was there. We would also talk a lot internally about how program managers would take about two years to become worth anything. And the reason why was because you're building up all this intuition or experience or gut feel from working with engineering leads, who a lot of the time would be your peers if you were an individual contributor, with QA teams, with Ops teams if you're doing web stuff. And so a lot of that was trying to figure out ... The process of building was what was within the domain of the program manager; I think the upper managers would do a lot more where they'd come to solutions-based discovery, that type of stuff. And so, yeah. Anyways, that's what I'm saying.

Holly:
Okay. Interesting. So what happened after that for you? How did you move on from there and what came next?

Chris:
So I went through a cycle of trying to be a business development person for a little while, and so took on roles that were hybrids between product management and business development. That included companies like Waze, where I was director of North America BD, but half of my role was to figure out what tools we needed to be able to do these deals. And specifically, the types of deals that we were doing for Waze would be like doing a partnership with a local TV station to use Waze on air during morning traffic reports. So it was a humongous growth engine for us.

Holly:
Interesting.

Chris:
Yeah.

Holly:
I was just going to say, I think for a lot of modern product managers that I talk to, unless of course they're in the B2B space, BD might be a bit foreign. Can you tell us a little more about why you went to do more BD, business development?

Chris:
Yeah. The reason why I went after BD was really because of this gateway role right after Microsoft, where I joined a company called Dash Navigation, which made a connected GPS device. So this is before iPhones. So having a GPS device that sat on the dashboard of your car that actually had a cellular modem meant that we had the ability to start to create services that could be consumed in car. My role was actually a technical evangelist. The difference between, say, an evangelist and, say, a business development person is that an evangelist is really for some type of solution that doesn't have an industry yet. In the case of the connected device in the car, it was like, what would it mean to have connected services in the car?

Chris:
The device inherently would include things like traffic and map updates, but what else could you do once you had APIs in the car? And so my role was to go out about 50% of the time and really talk about the benefits of that. And the other 50% of the time was to figure out what platform features we needed to build ... In this case, what is the app store we needed to build; we ended up getting about 80 apps in about six months on that particular device. But that really opened my eyes up to what business development is. And really business development is this idea that ... If we think about product management as trying to create a solution to a problem for someone, most of the time you're using your internal resources within an organization.

Chris:
With business development, you're finding other organizations on the outside to really combine your efforts. And ideally, I'd say the crux of the real work that goes into business development is not so much negotiation or contracts, because that's definitely part of it, but this idea of finding some type of philosophical middle ground that you can really believe in together to be able to move forward. And so for me, the extension of the product role into business development is very natural. I think really good product people reach outside of their organizations a lot of the time. I think the reason why I eventually got tired of doing BD roles was that it was very hard to provide feedback to a product team as a business development person, because they think of us all as douchebags that are having champagne lunches all the time, versus the idea of being a product manager that is very friendly to BD, which is much easier.

Chris:
And I think that's my passion too. So, yeah, basically after Waze, I started my own company, which was called Complete Seating. And Complete Seating was restaurant operations software. But what was maybe more unique about it in comparison to, say, something like OpenTable was that we were really focused a lot on how we were using what we'd call data science now, or what we'd call business intelligence back then, to be able to really make the operations of a restaurant much, much better and much more efficient. And so we did a lot of really interesting stuff there. I think the thing that was most interesting for me about that particular role as a founder and a product-based founder was that I spent an awful lot of time in restaurants. So at least on a weekly basis I would be sitting there as a host and [inaudible] as I was basically ... And I didn't know it was called contextual inquiry at the time.

Chris:
But it was incredibly impactful that, coming off this experience of writing lots and lots of specs, I was suddenly very viscerally involved in the solution that I was trying to build, and it allowed me to make much better decisions. And so that was really, I think, where the love I have for user research started, with that particular role. Yeah.

Holly:
That's a really great example, that you were in the restaurants performing that role. One other element of that I want to hear more about is going from being a product leader, then doing BD for a little while, to founding your own company. Did you raise funds? Did you bootstrap? How did you make that transition?

Chris:
Yeah, so we bootstrapped pretty much the whole time. And I would say, actually, looking back at it now, that was maybe one of the reasons why we failed. I think there were a couple of other reasons too, which is that we didn't build the right type of trust with our end users like we should have. And this is something that I think a lot about when we talk about the practice of building great AI products today, so we'll end up talking about that more. But it was hard. I mean, the idea of being a product person means you're always working with other people that are doing other roles. And so at Microsoft, you had access to a lot of resources to do that; in your own company, like Complete Seating, it was me and another technical co-founder. And so I wore a lot of different hats.

Chris:
I would actually go out and do sales. I would [inaudible 00:15:29], I was working in a restaurant on a weekly basis. I wasn't getting any tips, by the way. And then I would also be doing engineering. I was doing design, I was doing specifications. In this case, we took a very relaxed approach because it was two people. We were using Pivotal Tracker at the time. We did a lot of work around how we were doing continuous deployment, continuous integration. And so from my standpoint, we invested an awful lot in making our lives easier up front. And so that meant that we probably had the most stable product that two people could have built for a high-demand type of industry like restaurants. Yeah, so it was hard. I mean, I think the thing that was most difficult is that co-working spaces were also not as in vogue as they are now. And so I was alone at home most of the time, because my co-founder was halfway across the world as well.

Holly:
Yeah. Can you place that in time for us? When was this?

Chris:
Yeah. Let's say that was probably around like ... I think it was like 2009 or 2010.

Holly:
Okay, yeah. That was around the same time as my own startup ... Well, the startup that I was a part of; I was not the founder. But the startup that I was a part of was also doing remote work. It was so early that there weren't such common words for it, so we said we were geographically diverse, but yeah, we all worked from our homes in different locations and co-working wasn't really a thing. So I very much know what you're talking about. I remember trying to convince ... Like buying video gear; not all laptops even had video in them by default. So I remember buying headsets and video cameras and sending them to all team members and being like, "When we get on the call, you must use this because we'll be able to, like, be together better."

Chris:
That's right.

Holly:
I mean like, "What are you talking about, Holly?"

Chris:
[inaudible 00:17:19]. And it actually was interesting because my co-founder was based out of Hong Kong for a little bit at that time, and then out of Japan. And then, I ended up moving to Hong Kong, while I was basically at Waze. And so there was this real issue that even though we were winding down the business, I was still getting phone calls at like effectively 4:00 in the morning, my time because I had been too smart and set up this great like phone forwarding mechanism through local numbers and Skype and everything. I think I was maybe a little bit of an early adopter on the ability to really look like you're from the US, but you're somewhere else for a little while.

Holly:
Yes. Yeah. So they were calling you, they didn't know you weren't there. You're like, "I'm actually picking this up from Hong Kong."

Chris:
Exactly. But yeah, so I mean, after ... It was actually Complete Seating and then Waze, and then I ended up moving to Hong Kong and started working for a company called Horizon Ventures, which is a venture capital firm ... It's a very big one that you've probably never heard of, but they've invested in groups like Waze, DeepMind, Facebook, Twitter, Spotify, those types of companies. And so for them, I was director of product, working a lot on different education projects, but also helping out with the portfolio companies that they had. So like Affectiva, which is a group that does facial analysis of emotion; it's a spin-out from the MIT Media Lab. It was something where I helped try to think through what it would mean to productize these types of technologies. And so it really gave me, I think, a continued taste for this idea of what it means to build machine learning and AI projects as well. And this was back in like 2013.

Holly:
Yeah. It sounds like you have a trend of being very early in some of the tech adoption curves and yeah, so-

Chris:
Yeah, like when we talk about AI now, AI is really where mobile was about 10 years ago. So definitely the emerging tech scene is something I think about a lot, because we don't have a lot of the heuristics, the rules, that we have now for mobile in the same way. There's definitely an ever-changing skeuomorphism that people are using to bring them to the next version of how software works. But the reality is that when it comes to mobile, we have way more experience building consumer products versus AI. Even the AI consumer products that have been around for a very long time, like fraud detection in credit cards or spam detection in email, people don't really interface with the bot much. It's a magic box to them, or it's something that happens in the background. And so the idea of AI systems that are actually showing their intelligence and interacting with people in a way that is meaningful and interpretable is a very, very hard thing to do.

Holly:
Yeah, absolutely. So how did you get deeper into that part? How did you get deeper into the AI and machine learning?

Chris:
So for the last three years before I joined IPsoft, I was working for Philosophie, which is a design consultancy based out of LA and New York. And so my role originally was director of product strategy ... Product strategy within Philosophie was basically product management; it's because it came from an agency world that they call it product strategy. So I've probably had like every possible product-type role title that you could imagine. And I'm adding another one with chief product architect.

Holly:
Uh huh (affirmative). Yeah, you're changing up the role names too, just like the new technology.

Chris:
Exactly, exactly. And so, within Philosophie I would manage a lot of the ... do actual client work and then set best practice around the data science, machine learning and AI projects that we do. And this would be for very large companies like Prudential, PWC, Google, E-Trade, et cetera. And so within those organizations, they usually want to harness the power of something like machine learning or data science, but they really wouldn't know how. As a design consultancy, that would mean that we would try to really consider what are the problems that are worth solving within the organization and then how do we apply some of these more emerging technologies to it?

Chris:
That included some projects with blockchain and VR. It was really a Bingo card of emerging tech, as you could imagine, especially when working with different types of innovation labs in large organizations. They're constantly trying to disrupt themselves but having a very hard time doing so. And so they tend to have the belief that if they disrupt themselves with new technologies, that will bring along the culture change. But the assumptions that they have internally are actually very difficult to pull apart.

Holly:
Yeah, very much. I've done a bit of that as well. Sometimes I think that the new technology is less threatening than the idea that this technology has enabled a new business model, and when we adopt that new business model, someone is going to have to say that the business model they'd already pitched, that was adopted a while ago ... Maybe it wasn't wrong at the time, but it's wrong today.

Chris:
That's right.

Holly:
And that's hard.

Chris:
It's very hard. There's a great paper in the Strategic Management Journal from the mid-80s about dominant logic theory. And dominant logic theory is this concept that, if you're a successful executive and you bring your company up into some level of success, that means you want to open up to new customers, new markets, new problem spaces, things like that. But still, the dominant logic that you use is what brought you there in the first place. And the big issue with that is that it's not necessarily the logic or the mindset or the assumptions that are going to help you be successful in a new market or a new problem space. And so a lot of what we would try to do at Philosophie, we would do things like Crazy Eights, which is a sketching technique to be able to get ideas out of people.

Chris:
One of the things I would ask a lot of the time is for people to sketch bad ideas, and like 50% of the time that bad idea is really bad. But the other 50% of the time, when you look at it, you would consider whether, if you relaxed some constraints or assumptions that you have about the world, it could actually work. Greg Larkin wrote a book called This Might Get Me Fired, and he talks a lot about entrepreneurship. One of the things that I've seen him use in some of his workshops is the question of, what would actually work but get me fired here? Because it ends up being cultural problems that usually hold people back from doing interesting things in large corporations, rather than the feasibility of technology or the capability of resources internally. I mean, there's tons of smart people in large companies; they're just not able to act on a lot of what they do [inaudible 00:23:47].

Holly:
Yeah. So you mentioned Crazy Eights, for anyone who's not familiar with that, tell us a little more about what Crazy Eights is and how you would facilitate that.

Chris:
Yeah, great. Crazy Eights is something I use a lot during divergence, or diverging activities. It's usually meant to pull out a lot of different, what I call idea fodder, when trying to understand what people could think of as a solution to a particular problem. So usually you'd start with something like a "how might we," which is really just a framed question on how we might solve a problem. And the reason why these "how might we's" are important is that they're not presupposing a solution, so it's pretty open ended, but it's framed in such a way that you're constraining it in a valuable way. And so from that "how might we," what you'll do is you'll take a sheet of paper, you'll fold it into eight sections, and then you get one minute per section to be able to sketch something.

Chris:
And so, we try to push people to draw more than they write, but there's always one person in a group that decides to just write blocks of text in every one of those sections. But it can be something that is more illustrative of a situation. It could be stick figures interacting with each other, but it could also be UI elements that are specific to a particular view. And so we've done all types of those things. I find this is a great way to get a lot of ideas out from people that are maybe subject matter experts, but it gets them to ... Because you have to think very fast to draw all these things, it means that you're not getting stuck on your ... You're really breaking out of your assumption set a little bit more, your constraints a little bit more.

Chris:
I do some really weird things with Crazy Eights as well. So, not only asking for one of those panels to actually be a bad idea, I've also used card decks a lot to start to randomize each panel as well. And so-

Holly:
Tell me about that. What does that mean?

Chris:
Yeah. What I started thinking about is, there's this great deck of cards called Oblique Strategies by Brian Eno. And Oblique Strategies is this deck of cards that ... Brian Eno is a producer that has worked with a lot of different musicians, including U2. And whenever musicians got blocked in trying to record some type of music, he would always ask them questions or throw out statements, trying to get them to think differently about what they're building, to get out of this block that they have. So what he did is he created a whole bunch of these statements and questions that he then turned into a card deck. And so you can buy this card deck online. It comes in like a full leather box. But what I would do is I would draw a different card for each one of the panels.

Chris:
And so some of the things are very interesting, like "Make the least obvious thing the most important." He was always referring to, say, instruments or to parts of the actual music composition. But in our case it's ... If we were to think about this particular thing now, if we take everything that is small detail and blow it up within this interface, what does that mean for the way people interact with this? There are also really weird prompts like "Courage!", exclamation point, which is another one where everybody's processing it in their own way to then turn it into something else. And so I've used the Oblique Strategies an awful lot. There's also a deck of cards, or sets of cards, called Trigger Cards, that are done by Ale in Spain.

Chris:
And so I actually just helped create a trigger deck for machine learning and AI projects. But essentially, the one I use the most right now, at least because I don't have the new deck in my hands yet, is one called the User Centric cards. And I carry it around in my backpack because it's so valuable to me. These are 60 different questions that, if you had time when you were trying to consider something being user centric, you would ask all 60 of these questions. Some of these questions are things like, what if it actually took into account everybody's point of view? And the thing is, you don't have time to actually answer all 60 questions for every decision. But if you draw, say, five of those questions randomly out of the deck, you've at least considered more than you would have otherwise.
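
If you want to simulate that kind of draw yourself, here is a minimal sketch. The prompts and function name below are illustrative placeholders, not the actual Trigger or User Centric deck wording, and the five-card draw is just Chris's example number.

```python
import random

# Hypothetical stand-ins for a handful of the ~60 prompts Chris describes;
# the real deck's wording isn't reproduced here.
QUESTION_CARDS = [
    "What if it took everybody's point of view into account?",
    "What would this look like for a first-time user?",
    "What happens when this fails silently?",
    "Who is excluded by this design?",
    "What would make this feature unnecessary?",
    # ...imagine the remaining cards of the 60-card deck here
]

def draw_prompts(deck, k=5, seed=None):
    """Draw k prompts at random, mimicking pulling cards from a physical deck."""
    rng = random.Random(seed)
    return rng.sample(deck, k=min(k, len(deck)))

if __name__ == "__main__":
    for card in draw_prompts(QUESTION_CARDS, k=3):
        print("Consider:", card)
```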

Chris:
I think this really gets to randomness as part of something that I would refer to as adversarial product management, which is something I've been thinking about an awful lot. And randomness is a key aspect of this. So I'll use these types of decks of cards to add a lot of randomness to the situation, and part of the reason why is that a lot of the time we're not able to see all the possibilities because we're biased ourselves. The main issue is that when you are biased, you have something called the blind spot bias, which means that you cannot actually see your own biases. And so the question becomes, how do you get out of these blockages or these biases? And I've found that using things like randomized decks of cards has really helped me.

Holly:
Yeah, I love that. So those are a lot of different techniques that you use in divergence activities. So in places where you're trying to get people to think differently, to push them outside of their norm and to force them to take a different perspective or ... And I know that, I've certainly, I've done some of these ... Sorry, go ahead.

Chris:
I was going to say this works for convergence activities as well too.

Holly:
Okay, cool.

Chris:
Yeah. But I mean, the reason why I would say it works for convergence activities is that a lot of the time when we're trying to prioritize things, generally that's a convergence activity. It's prioritizing the items or creating a stack-ranked list. What I found is that this randomization also helps to a certain extent ... It's like whenever I would work with very junior product people, if they start to get paralyzed by decision making, I would ask them to flip a coin. And one of the things that's funny about flipping a coin is that when the coin is in the air, you know what you actually want to do. There's this gut feeling that you have, just for whatever reason, because the decision has been taken out of your hands, you know what you want at that point.

Chris:
One of the things that I've found, especially with randomization, is this idea that when you apply things like randomization to decision-making possibilities or processes, you start to really understand what your emotional desire is for a decision. And that's tied up a lot with your intuition and your gut feel, and it is incredibly important. There's a canonical story that people who don't have emotions can't make decisions. And so I think getting out of the way of our own decision making, being emotional still, and still really caring about the qualitative aspects of the context or the value that we're trying to get out of things, is really, really important. So these types of randomization techniques I've found useful for that. I mean, I would actually say that in true decision-making science, it's not just about two decisions either.

Chris:
So it's better if you roll a die because there's probably at least six options. It's never just black and white. There's actually a lot of different ways that you can deal with the situation. So I found that to be pretty valuable too.
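
As a minimal sketch of that die-roll idea (the option names below are made up for illustration), the point is to notice what you hoped the result would be before you look at it:

```python
import random

def roll_for_direction(options, seed=None):
    """Pick one option at random; the value is in noticing your gut reaction
    while the 'die' is in the air, not in obeying the result."""
    rng = random.Random(seed)
    return rng.choice(options)

if __name__ == "__main__":
    # Hypothetical decision options, for illustration only.
    options = [
        "Ship the smaller fix now",
        "Run one more round of customer interviews",
        "Prototype the riskier concept",
        "Do nothing and revisit next sprint",
    ]
    hunch = input("Before rolling, which option do you secretly hope comes up? ")
    print("The die says:", roll_for_direction(options))
    print("You hoped for:", hunch, "- that gap is the useful signal.")
```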

Holly:
Yeah. I need a little more clarity there because I'm not sure I understand how ... How do you add some of that randomness while in the converge phase? So if someone is, let's say, working through a prioritization decision ... I like the example of tossing a coin. But what about some of the other things you're talking about? Like, do you use the cards and the questions in that phase, and what does that look like?

Chris:
The questions work much better in the divergent stage of the work, because it's about asking, have you considered this? Have you considered this type of thing? And that's really where ... when you talk about great critique of particular work, designers do this through design critiques, coders do this through code reviews. I've really tried to get the practice of doing product critiques going with my teams, because good product critique is about asking questions. When we talk about what it really means to then do prioritization with randomization, it's more about trying to push you in a direction that maybe feels uncomfortable and then understanding why it feels uncomfortable in some way.

Chris:
This is actually related to a design technique called discursive design, or provocative prototyping. So I would much rather create a prototype, even if it's a horrible prototype, even if it's what we think people do not want, and put it in front of someone, because then it makes it so real to them what is good or what is not. I think when we talk about randomization, this idea of just choosing one of the options randomly gets to that point. Like, if I gave you a list of work items that was randomly prioritized, you would look at it and you'd say, this feels wrong. And then I would ask, "Why does this feel wrong?" And that would get you to this point of really understanding what is the motivation behind [inaudible 00:32:31]. That's where I found ... Yeah.
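
A minimal sketch of that randomly ordered backlog, assuming a hypothetical list of work items (the names are invented for illustration); the shuffled output is meant to provoke the "why does this feel wrong?" conversation, not to be followed:

```python
import random

def provocative_ordering(backlog, seed=None):
    """Return the backlog in a random order. If the ordering feels wrong,
    articulating why surfaces the real prioritization criteria."""
    rng = random.Random(seed)
    shuffled = list(backlog)
    rng.shuffle(shuffled)
    return shuffled

if __name__ == "__main__":
    # Hypothetical backlog items, for illustration only.
    backlog = [
        "Onboarding revamp",
        "Billing bug fixes",
        "AI-powered search",
        "Accessibility audit",
        "Partner API",
    ]
    for rank, item in enumerate(provocative_ordering(backlog), start=1):
        print(f"{rank}. {item}")
```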

Holly:
It sounds a bit like a technique for probing them and pushing them into getting over analysis paralysis.

Chris:
Absolutely, yeah.

Holly:
Being like, "Just get past that and keep going."

Chris:
That's right. Absolutely.

Holly:
Cool, okay. Well, you mentioned adversarial product management in there, and I know you and I had chatted briefly about that in the past, and I thought it sounded really interesting, in particular because of how it ties to machine learning, where when they're trying to figure out which path the machine should go down, they follow all these systems and algorithms around exploration, making decisions about exploring versus optimizing where you are. I'm sure some of our listeners are not machine learning experts. So can you tell us a little more of that picture? Like, what is adversarial product management, and what is the science behind why you want to get some random sampling when you're making these decisions?

Chris:
Yes. I think first, for me, adversarial product management is really about how do we use contrarian thinking to get to the core of what really matters. And when I think about that term, a lot of it comes from something called red teaming. The history of red teaming really goes back to the Cold War days. In the military, we were the blue force because we were America, and whenever the blue force was going to go out and do training missions, they needed someone to pretend to be the enemy, so that enemy was the red force, which was supposed to be the USSR at the time. And so red teaming became, to me, the idea of taking on the mindset of your adversary or of your enemy, and the reason why you do that is that you will actually be able to understand yourself better.

Chris:
You'll be able to understand how you can defend and how you can maybe take better action against your adversary. Now, when we think about it from the standpoint of product management, my first thought ... the first thought of a lot of people is that this adversarial model is really about your competitors. But the truth is that in today's world, when we talk about strategy, it's not so much about your direct competitors. I mean, maybe you'll have a situation like Uber versus Lyft, but most of the time it's really more about mind share. At least, the next step is maybe thinking about it from the standpoint of the mind share within your customer's mind. I've found that the most important aspect of competitive analysis, or understanding your competitive positioning, is really what your customers say about you and the other people that they think about.

Chris:
But it's not even that. It's not even that your customers are necessarily your adversary in this case. When we talk about bias and assumptions, your true adversary is actually your current mindset. Your current mindset is the thing that is going to keep you from actually understanding the world in a better way. And this is very much related to John Boyd and his OODA loop, which is observe, orient, decide, act. It's maybe like a proto build-measure-learn or agile looping methodology. But it's about this idea that you're constantly trying to understand more about the world. You're correcting the model that you have about the world because you're wrong in some way, and then you're figuring out what to do next. And so from that perspective, the key aspect of the OODA loop, in the Boydian sense, is really about the fact that you were wrong in some way.

Chris:
And the best way to do that is to approach the world as if you are wrong. The best way to do that is to take this contrarian mindset in a lot of ways. And so let's talk about the other ways that you can do this. When we talk about adversarial product management, one of the first ways you do that is by hiring a lot of people that have diverse mindsets. That will help get you out of this groupthink. Ideally, if you're creating things like psychological safety where people can disagree with each other, they'll actually be able to work together to create something better. There's more conflict in those types of teams, but they'll actually create more interesting and innovative things.

Holly:
Yeah, it's the good conflict.

Chris:
Exactly.

Holly:
One concept that helps me visualize that is antifragility: things that get stronger when there's friction. The friction makes you stronger instead of weaker.

Chris:
Yeah, absolutely. Friction can help you in a lot of situations because it's not always that everybody wants everything right away. Anyways, I agree with you 100% that like antifragile is like an interesting way to think about that. But you usually want to have teams of about five to 10 people. You don't want to have millions of people on your team, because it just is not functional or it's very hard to get functional as we can see with like modern democracy today, things like that.

Holly:
Yes.

Chris:
But then, after that, that's why you go off and do user research. You go off and you do user research and talk to your customers because they have a different viewpoint than you do. And so you're trying to learn about their mindsets. You're trying to learn about the way they work in the world. And that's the next step: how do you actually use your customers as contrarian viewpoints to your understanding of the world? And so from there, how you set up these types of discussions, how you set up these types of user interviews, should be about trying to go after what is surprising to you more than anything. Like, just testing ... I mean, it's always good to validate what you have, and I think validation is too strong of a word.

Chris:
It's more like you're building confidence to a certain extent. But that's the issue: you need to be using these customers as a way to understand more about the world, not to just continuously validate what you already know. And so that's the second part of what I call adversarial product management. And then the last part is randomness, as we've been talking about.

Holly:
Uh huh (affirmative). So before we go into the randomness a little bit more, on the validation and the customers as adversarial, I think there's something there that's really key and I want to flesh it out a little bit more for our listeners. One of the things that makes me think about ... I'm really big on the scientific method and the principles of scientific experimentation, and one of the elements of that is that you can't say an experiment is proving something if there was no way for that experiment to disprove it. And I think that's the thing, when we use the word validation, a lot of people use it wrong: if you went out to do the research to validate your idea, but there was absolutely no way for that research to invalidate what you were doing, then that wasn't really validation. Right?

Chris:
That's right.

Holly:
It has to be something where you're actually trying to seek out the thing you didn't know. And I think what you're saying is so right on the money with that because that's actually one of the things that I tell people. Like, if they're saying, "There's all these things that I would want to research and I don't know where to start." I'll usually say, "Well, what's the thing you understand the least well? That's where you should start because that's where you're going to get a ton of value from that conversation."

Chris:
Yeah, this is very much related to Kenneth Stanley, who is now head of AI over at Uber. He did some early work around what is referred to as novelty-based search. And so this is where you use the randomness of, in this case, genetic algorithms to ... There's something called Picbreeder, which was trying to create interesting pictures, but then using humans to decide which of the pictures were more interesting or more novel. And so it's this idea of how do you search a space of something that you don't know very much about, and what you do is, every time you take a step forward in time, you're constantly looking at what you have not done yet. And so when we talk about the idea of unknowns, especially for very early stage things ... I mean, I think surprise is always around the corner for everything, even when we ask questions that we know the answer to, potentially.
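
For readers curious how that plays out in code, here is a toy sketch of the novelty-search idea (keep exploring toward whatever is farthest from what you've already seen), under simplified assumptions: the candidates are just 2D points and the mutation and distance functions are invented for illustration, so this is not Stanley's actual Picbreeder or NEAT implementation.

```python
import math
import random

def mutate(point, step=0.3, rng=random):
    """Produce a nearby candidate by jittering each coordinate."""
    return tuple(c + rng.uniform(-step, step) for c in point)

def novelty(candidate, archive, k=5):
    """Mean distance to the k nearest points already seen: higher is more novel."""
    dists = sorted(math.dist(candidate, seen) for seen in archive)
    return sum(dists[:k]) / min(k, len(dists))

def novelty_search(start=(0.0, 0.0), generations=50, pool_size=20, seed=0):
    rng = random.Random(seed)
    archive = [start]
    current = start
    for _ in range(generations):
        pool = [mutate(current, rng=rng) for _ in range(pool_size)]
        # Select for what we haven't seen yet, not for any fixed objective.
        current = max(pool, key=lambda c: novelty(c, archive))
        archive.append(current)
    return archive

if __name__ == "__main__":
    explored = novelty_search()
    print(f"Visited {len(explored)} points; the last one was {explored[-1]}")
```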

Chris:
But I think when we talk about very early stage things, it's about what you don't know yet. And so that idea of just doing anything that you don't know about yet is really, really interesting and important. I think we had talked about assumption mapping, and the idea of assumption mapping really gets at the heart of that type of thing. And that's the hard thing, I think: how do you draw out what the assumptions are about the world? It's probably the hardest question you can ask, because if you just ask someone, "Hey, tell me everything you know about this," they're going to give you probably a tenth of what they actually know. And so a lot of the time you have to prompt it with different types of questions.

Chris:
You have to go out and, through the iterative process of talking to people, you're going to learn new things that you don't know. And so I 100% agree that when we talk about research, when we talk about doing great work, it's about going after the things that you don't know a lot about. Clare Gollnick, she's the head of data science over at NS1, and I saw her give a talk about the philosophy of data, where she talks a lot about the idea of data being good evidence for something. And evidence is really, really important only when it's surprising, because if it's not surprising, it doesn't matter that much. I've tried to pull this over to when we talk about qualitative research, for how we do a better job of understanding our customers, not just from a quantitative standpoint.

Chris:
I think that's really important. But you do need to start with some type of feeling or question that really matters. And to me, that's where the over-quantification of research comes in right now, and there have been a lot of really great talks about that as well. That's the thing I'm always concerned about: we're over-quantifying things when the reality is, it still is a human being that has to make the final decision. That's usually a product person when it comes to prioritization. Right?

Holly:
Yeah. The decisions have to come from a mix of different types of evidence. Right?

Chris:
That's right.

Holly:
So circling back to the third element of adversarial product management and the randomness, how does that tie in and is there anything else about that part that you wanted to say?

Chris:
Yeah, I mean, I think when we talk about randomness, the real key aspect of randomness is that it's not just randomness for randomness' sake. It's not just that there's some random number generator that is producing garbage somewhere. It's the fact that it's this combination of the randomness with your interpretation of the world. I've been known to take things too far when it comes to these types of techniques, but I've started to do Tarot readings for products. And the reason why-

Holly:
What does that mean? Sorry. Like what?

Chris:
Basically, Tarot readings for products is where we have a deck of Tarot cards. A lot of people will look at things like Tarot or I Ching, which are all considered to be spiritual or fortune-telling techniques. But if you interpret them a little bit differently, it's not necessarily about the future, it's about the way you're framing the situation. What you can do is pull a bunch of random Tarot cards, and I can do a reading where I ask you questions about your product. So not only is there the randomness of the cards, there are the symbols that are on each of those cards. It's my interpretation of those symbols, and then it's your interpretation of the questions I ask.

Chris:
And so it's being filtered through two different human beings. The Tarot cards actually don't matter. It could be a deck of these User Centric Trigger cards; it can be really anything. But the key point is that we're taking something that is our current mindset, or a current set of assumptions, and then we're asking, if we reframe it in a slightly different way, what does that mean about the models? And some of those reframings are maybe bad; they may not actually make sense, maybe they're stupid in some way. But they're still valid for doing that type of exploration. Now, the issue is you don't want to be doing that all the time. I probably don't want to be pulling out Tarot cards for every decision you make, because you'll just look crazy in a product context.

Holly:
I do have this visual now of you walking around with a backpack full of all sorts of different cards. You're like, "Here's the User cards, here's the Tarot cards, here's the Oblique cards." [crosstalk 00:44:36]. See, there you go, which one is that?

Chris:
This one is the Trigger User Centric card deck that I have. I was originally carrying about eight different card decks with me.

Holly:
I used to be the woman who walked around with all the sticky notes and sharpies. So I would like go to a conference and they'd be like, "Do the [inaudible] with the people sitting next to you." And I'd be the one who's like, "All right, we got the sharpies, the sticky notes. Let's do it." I get it.

Chris:
[inaudible] so crazy about these randomness techniques. But I don't know, I think the card deck thing is really interesting. I'm very excited to continue to see, when we talk about the generative aspects of how we actually do our work ... I think there's really something interesting around how we start to build agents that apply randomness that is contextual to us and then allows us to reinterpret that. And so there's this whole concept around animism in artificial intelligence, and how maybe animistic-focused AI agents may be better than personified ones. And [inaudible] familiar with animism, but animism is basically this idea that there's a soul in every object.

Chris:
And so this would allow people to consider that ... If we talk about Native American populations, the wolf or the fox was, in some way, like a trickster. It doesn't mean that you're necessarily going to talk to a wolf in the same way, but it meant that you understood, and were able to interpret, the behavior that the fox or the wolf actually had, in a way that was understandable to you. And so I think this is going to be one of those big things when we talk about artificial intelligence in the future: there's always going to be a little weirdness about the way that they act as a person. But if they act as something that we know is not a person, yet act in a way that we're able to interpret better, I think that's really interesting.

Chris:
And so, I don't know, maybe this is a long way of getting to the concept of, how do we start to integrate more randomness into our work? And I think some of the early versions of this are for writers, for example; they can load up this Bayesian-like auto-complete thing that will spit out sentences for them when they write, say, science fiction. Now, it doesn't mean that every sentence that comes out of that is valuable, but at least it might spark them to go in an interesting direction that they wouldn't have otherwise. So I wonder what that means for product. At Philosophie we actually experimented with the concept of ... It seemed like every company that was doing something in machine learning about a year and a half ago was creating a sketch-to-HTML machine learning algorithm; Airbnb did one, we did one.

Chris:
But I think the future of that type of thing is that I could describe that I want to build a website that is some type of store, and it's going to be for, like, an automotive category. And there are probably a lot of heuristics and tropes that go into those types of sites, either as stores or as automotive sites. And so from there, what if I was able to see a couple of different interaction models that could take place for this type of thing, and then have it automatically tested against a usability heuristic model that would say, "Based on 90% of the users in the world, this type of thing will work better than that type of thing." And so I think there are a lot of really interesting tools that will help tighten the loop in how we actually do our work as product people, to avoid making the regular mistakes that everybody makes, but also to expand what we're considering when it comes to new functionality or new capability.

Holly:
So one thing that this just made me think of: are you familiar with the various iterations of AlphaGo, AlphaZero, Alpha-everything? It just made me think about how some of the breakthroughs have come in the games ... So for anyone who doesn't know, these are iterations of machine learning or artificial intelligence coming out of Google DeepMind that are competing against the world's top players in various types of games. It went from originally chess, then became Go, and recently they beat some of the world's top players in StarCraft, which is a very complicated thing to be doing.

Chris:
I think poker people would use not the same mechanism, but they're gearing up for that as well.

Holly:
Yeah. So they're attacking all sorts of different types of games ... Not new games, but games that are new for this technology. And one of the things that's come out of it is that now, or I guess we're probably a couple of years into Go being something that machines can do better than humans, Go players are learning new ways to play Go. And I think it was the same thing for chess. They're learning this from the machine. And I mean, Go is, what, thousands of years old, I think? It's a really, really old game for the human race. And there are parts of how to play it that humans just never broke out of, the particular direction they were going. But the machine had this randomness in it, it was testing out these different areas, and it came away with things where the human announcers watching it are like, "Oh my God, that's a horrible move." And then it ends up winning the game.

Chris:
That's right. If you actually look at a lot of these games, the way that, say, AlphaGo will play chess ... Sorry, AlphaZero will play chess, or AlphaGo will play Go, after people see these moves they really refer to them as almost beautiful. They'll refer to them as alien as well. And the way that these types of systems are actually trained is by playing against themselves. The thing that's interesting about that, too, is that there's a set of known rules in Go and chess, and so that's generally how these systems are built. And so that's why it's very easy, within a simulation environment that's low cost, for it to play against itself hundreds of millions and billions of times, to build up this heuristic on what is actually the best move in this case.

Chris:
And so, I think this is very interesting, because what this is about, an awful lot, is that there's a type of machine learning algorithm called the GAN, or generative adversarial network. This is where all the super lifelike pictures of non-real people come from. And so the generative aspect of it is trying to create a picture that looks like another picture of a person and fool something, which is this adversarial model that's trying to judge if it's real or not. I think this gets back to the idea of adversarial thinking, especially in feedback loops. I would say that everything that has been done that is worthwhile and great usually required some type of pushback or adversarial-like relationship with someone else. Like, some of the best artists are fighting with themselves all the time.

Chris:
I think about how we start to create better loops, not only with other people within our team, like, how do we do more critiques, which is, I think, a key aspect of that type of thing, but also how do we use other alien types of things that we wouldn't have thought of otherwise? And so this is a really big benefit to the future of creative work as well. When we get back to, say, the way that my dad used to work as a graphic designer, he would try to come up with a lot of different ideas, and part of his role was to create more uncertainty or ambiguity in the understanding of the problem. And I think that's something that we need to continuously do. Especially for things that are important or interesting to actually build today, there should actually be less certainty, and there should be less focus on the right thing to do, because that's only going to get us so far.

Chris:
And when we start to talk about how we tackle incredibly complex problems, especially between multiple people, especially at the mass scale of, like, the world, solutions that are incredibly directed or deterministic have certain types of effects or emergent behavior that we just won't be able to understand. I don't know, this has been a big wake-up call for me over the last couple of years, to see this type of adversarial modeling, especially with machine learning, but I've also started to see it as a pattern in almost every other industry as well. And so, I don't know, that's why it's been so important to me to think about this stuff.

Holly:
Yeah, super fascinating. There's so much, I think, as you think through how this applies. I'm definitely interested. I hope some of the people who listened to this will have some different ideas about ways they can apply this to their lives, and they can share with us what it means for them, but this has been really fascinating. I know we're almost out of time. My favorite way to end is to ask if you have any top advice, sort of your number one message, for product leaders or startup founders who are trying to build awesome products, that they can take and put into action soon.

Chris:
Yeah. I'm not going to say get comfortable with being uncomfortable because I think a previous person on your podcast already said that.

Holly:
Yes.

Chris:
[inaudible 00:53:24]. I would say that we need ... One of the most interesting aspects of the world is how we actually deal with ambiguity. And so from a product perspective or from a founder perspective, your job is to really be an ambiguity leader for the rest of your team. And so it's not that you're necessarily going to make ambiguity disappear from the world, but what you need to do is try to make ambiguity understandable and comforting to people on your team. Because ambiguity is good, uncertainty is good. It allows you to do things that are more interesting and more important. But that's the biggest issue: how do you actually make it so that people can find certainty in certain things about what they're going to do, so that they can move forward and execute? I don't know, that's the thing I'm constantly thinking about for my teams: how do I build alignment around what we're doing? And then how do I help them interpret the uncertainty of the world in a way that is helpful?

Holly:
Yeah. That's awesome. It actually works super well for a conversation I very recently had where someone was asking me, "I don't know if I need a product leader, but here's the problem I'm having." And it was basically like, things are too ambiguous and we don't know what levers to pull, and we don't have clarity. And I was like, "You need a product leader." And so that's exactly it. That's awesome. So yeah, help people get comfortable with that ambiguity and know what to do with it and know how to use it for strength and know where they're going. Awesome.

Chris:
Exactly.

Holly:
Well, how can people find you, Chris, if they'd like to learn more? As your adversarial product management develops, where can they follow along?

Chris:
Yeah. So, I'm Chrizbot, with a Z on the end, on Twitter. That's a long story, but you can also find me on LinkedIn, and I have a bunch of talks online about semantic road-mapping, adversarial product management, and artificial intelligence for product management. And so you can find all those things on YouTube as well.

Holly:
Wonderful. Awesome. Well, thank you so much for being a guest. It was great and I'm excited to share this with the world.

Chris:
Thank you so much.

Holly:
The Product Science Podcast was brought to you by H2R Product Science. We teach startup founders and product leaders how to use the Product Science method to discover the strongest product opportunities and lay the foundations for high growth products, teams, and businesses. Learn more at H2rproductscience.com. Enjoying this episode? Don't forget to subscribe so you don't miss next week's episode. I also encourage you to visit us at Productsciencepodcast.com to sign up for more information and resources from me and our guests. If you love the show, writing a review would be greatly appreciated. Thank you.
