Customers Speak: Laying the Foundation for Omnichannel AI at Scale
Clay Hausmann: Hi, I'm Clay Hausmann, host of Contextual Intelligence, and we're here today with a special episode and a unique format for us. And that's because we recently hosted what we called the Aktana Masterclass around how to implement and scale AI within your commercial or medical operations of a life sciences company. We had a number of different thought leaders and customers who participated in the event, and one of the most interesting sessions that I found was a panel discussion that was led by our esteemed head of markets, Alan Kalton, and Alan is here with me today. Welcome Alan.
Alan Kalton: Thanks so much Clay. Great to be here. And yes, it was a really exciting session. We had Paul Thompson who heads up Field Force AI enablement at Novartis. Mike Soper joined us, he leads global field capabilities and excellence at Biogen, and we also had Youssef Idelcaid, who heads up data science for Genentech. It was a really interesting session.
Clay Hausmann: Alan, I had the privilege of listening to it live, there were a lot of really helpful insights from the diversity of our speakers, as you just mentioned, but I find that the panel moderator often has the best front row seat. And I'm curious, what were some of the most interesting takeaways that you had from leading the conversation?
Alan Kalton: Well, for three gentlemen that have been through a similar-sounding journey, they had some really unique experiences that they shared with the panel. We learned a lot about the critical skills that they needed to evolve in their teams to be successful. They talked about how they define success and managed the expectations of their organizations, and they talked about the journey and shared some of their own personal advice on how they became successful at doing this type of work. And we were very blessed at the end to hear a little bit about their ambitions for AI in the future of commercial execution.
Clay Hausmann: That's great. Well, it really was an insightful and helpful conversation, so let's not take any more time setting up the conversation, let's get straight to it. And without any further ado, here is this session at the masterclass that Alan moderated with our customers.
Alan Kalton: It's been a really exciting session so far, and I hope you've all taken away a few nuggets that can help you find success in your own programs. Now, we've touched on a number of different aspects of bringing together AI at scale. We've looked at how to get started, even if your data isn't perfect, we've looked at how to design efficiencies that make it easier to scale, we've looked at best practices for managing global programs, and we've even covered some of the technical aspects around creating the right AI framework that works with the unique datasets that we need in local markets. But we now have an opportunity to engage with our guest speakers, with the addition of Mike Soper once again, and really pressure test some of these important aspects of the strategies we've covered. All right, let's get started. First question: data science by definition relies on data to deliver value, and we heard from Youssef in the first session about the importance of understanding and using data in the right way. The question for Youssef, ready to kick off today, is: can artificial intelligence in commercial execution be used to not just work with incomplete or inaccurate data, but also be set up to improve the quality of data over time? Over to you, Youssef.
Youssef Idelcaid: Thank you Alan, and nice to see you again, Paul and Mike. Very good question. I think yes, and I would name it, actually. It's called a feedback loop. What does it mean, really? Instead of making an AI system, or just a simple machine learning model, purely a consumer of whatever data you enter into that system, make it instead interact with the end user. I'm an advocate of giving power back to humans, right? In the process of building trust. And it's not just building trust, but basically, from what I have seen before, by creating that feedback loop, we improve the quality of the data we have in the first layer, which is data ingestion and model training. That's number one. I think the feedback loop, through AI and machine learning processes, definitely creates that opportunity to give back and improve the quality of the data. The second one is the use of machine learning in what I call data administration, right? We have seen a lot of data in the commercial field, for example, coming from audio, right? It could be as simple as creating frameworks for pre-processing or processing call logs by using speech-to-text or something like that. Creating the opportunity to structure unstructured data, even if it's not perfect, it's fine, but it creates an opportunity for machine learning, especially recommender systems on the commercial side, to get the models to a better state. There are other opportunities of course, but this is more in relation to marketing, and we're starting to see it, at least at Genentech, with Voice of Customer. Pharma and healthcare, by definition, is an industry with a lot of unstructured data, and I think natural language processing, and understanding more importantly and more specifically, can also help create a set of data that is usable. It's not just a cost center.
Again, I don't want to just be acquiring data and creating data systems without making value out of them, and I think machine learning helps get the data to a level of pre-processing or processing that is ready to be used. Again, when I say a good level or a good quality of data, it doesn't mean that you have a huge volume or whatever, like the big data buzzword. Instead, I think machine learning can definitely bring us from big data to smart data that is useful for the goal we are trying to achieve. And commercially, I think it's a good way to start.
Alan Kalton: That's fantastic, Youssef. Thank you. And I think that sounds like it's got the next series presentation already baked into it. Really [inaudible].
Youssef Idelcaid: [inaudible].
Alan Kalton: Fantastic. All right. Well, let's go to the next question then. I've got one here. When you're looking at deploying new capability, you want to do that quickly, and that's an essential part of maximizing time to value. So I'm asking the panel here: what are some of the tips and best practices that you've evolved that balance that need to minimize time to value with maximizing end user engagement and making sure the organization is prepared? Perhaps we could go to Mike first on this one.
Mike Soper: Thanks, Alan. I think we've heard a lot of these tips already in the previous sessions today, and it really starts with the whole approach to the program. And certainly in our case, we are really trying to live up to the ethos of build once and deploy many, which means you need to do a lot of work before you get to the markets. It's very hard to deploy a large-scale program if it takes months and lots of people at a local level, so the more that can be done upfront, and this is where the idea of the global use case library comes in, the better. But one of the comments from the session with Veronica and [inaudible] at the beginning is something that really resonated with me, and that is about standardization. Because we can have a library, we can have a framework, we can deploy quickly, but all it's going to take is one or two unexpected things happening, or appearing, or being suggested because some of the data, some of the inputs, are not aligned, to really undermine the efforts that we've put in at the start. I think that's a really crucial thing to be thinking about right at the very start of a program like this. And then the last point I would add is: have a look at what tools and delivery platforms you've already got. In our case, we have a capability that we've already deployed. It will save us a huge amount of time in terms of training on a new tool. We are evolving or enhancing an existing delivery tool for our field with the capability of next best action. I think there's opportunities to do that. I know a lot of companies make use of the Veeva delivery tools. There's opportunities to integrate into something that you've already done, which will only help your adoption when you come to rolling it out.
Alan Kalton: That's wonderful. Thanks so much, Mike. And maybe I could ask Paul to maybe make a few comments as well.
Paul Thompson: Very little to build on, you've covered it so well, Mike. I would echo the preparation and the standardization. I think we spent quite a lot of time, once we'd done it a few times, almost building a playbook and making sure that we were able to repeat and learn. And I think once we'd done it five or six times, we were doing it in almost a third of the time we took originally, so definitely that. I think getting the right people into the change program and into your center of expertise is also critical, and having almost a succession plan of talent for those that are going to bring this into their organization, because that's really key. And best practice sharing. I mean, great ideas can emerge all over the place, and the quicker you can grab hold of them, share them, and leverage them, the better, so think about how you can get forums for best practice sharing in place, too.
Alan Kalton: Fantastic. Thanks so much, Paul. I think maybe building on that, we've got a question around how you introduce AI into commercial execution, and in fact, that's a journey, which is something I think all of the presenters today have referenced. How do you manage that at a global level, where you are actively creating and managing expectations from a diverse set of stakeholders: at the affiliate level, at the end consumer level, with the representatives and marketers, and so on, who are all involved? And what sort of communications, maybe building on what you just said, Paul, have helped you to keep everybody aligned and moving in the right direction?
Paul Thompson: I think have a good story as a starting point. As Mike was just saying, you're often trying to connect into a lot of things that already exist, and what you want to show is how next best action is building into a collective change in a way of working. If you can paint the vision of what it'll be like to stand at the end of the journey and look back with all of this connected, then it becomes a lot less confusing to people, and often a lot less scary, in inverted commas. There's also a need to address any concerns that people have: is AI replacing my job? Is this just another cost cutting mechanism dressed up in fancy terms? You have to face that kind of thing head on and be able to articulate that no, it's not, this is genuinely responding to a need to service our customers differently and to use technology and data to do that. And I think engaging at all levels: clearly there needs to be a lot of senior stakeholder management. You need to get the senior folk talking about your program, endorsing it, and giving it that blessing, but all the way down through to engaging with different associates and bringing them into the program. And peer-to-peer influence is one of the biggest ways of getting momentum behind something, I think.
Alan Kalton: Fantastic. Thank you, Paul. I guess when you're embarking upon a program that really is about behavioral change and organizational transformation, making sure that alignment is maintained seems like a full-time job, so fantastic to get your insights there. Let's have a look at another question here. We've got three wonderful experts around the table. Based on your personal experience, what is one piece of advice that you would convey to somebody just about to embark on this type of journey for the first time? And maybe I throw that out to see who might have some advice they'd like to share first.
Paul Thompson: Happy to go. Something very simple: I would say start small, learn fast, keep smiling. As my former boss used to say, "You can disarm an awful lot of complexity with a smile." This isn't easy, so that's all I'd say. Just come at it with an agile mindset and that helps a lot.
Alan Kalton: Fantastic. Thanks Paul. And maybe Youssef, do you have anything to add to that?
Youssef Idelcaid: Yeah, absolutely. I would add to Paul: believe, right? When you believe in the journey, especially as leaders, you don't need to spend your time trying to convince people about this, right? We all know that this is happening, right? You need to be a believer because the journey is so, so long, and it can be so complex. It's not a software that you buy, it's not just an additional head count with certain expertise that you add. It becomes very spiritual, so you need to believe in it, and the rest is just history. Of course, you need to invest. You need to get people around and paint the picture, have a [inaudible] point, start small, dream big. There is no harm in dreaming. I guess in one of my podcasts I have mentioned that, and I will keep mentioning it. Really have big plans for your teams, your organization, your leadership, but start small and fail fast. Give yourself permission to fail. That experimentation mindset will get you to where you want to be, but you have to identify and set the destination in your GPS, otherwise you go nowhere. That's my piece of advice.
Alan Kalton: Fantastic.
Mike Soper: I would also say be realistic. It's great to have all these wonderful ideas and a roadmap, but when you are starting out... AI, as Youssef just said, has been around an awfully long time, but it still seems to be the buzzword, and you don't have to jump straight into that. There's a lot of value-added things we can do just with some simple rules that will make a huge difference to being in the field, so be realistic and be transparent that it's not all going to come on day one, and that it is a journey that you're starting.
Alan Kalton: Fantastic. Thank you, Mike. Thanks Paul, and thanks Youssef. Great words of wisdom for people about to embark on the journey. Let's switch to another one, which is a very interesting one, which is about how you define success for this type of program, and what would you recommend the industry adopt as a benchmark for success? And perhaps I know Mike, that's a passion of yours, so I think I'm going to go to you first on that point.
Mike Soper: Yeah, I think the answer is it depends on where you are in the life cycle of this sort of program. In the early days, of course, I think it's important to be focusing on things like adoption, and not just usage but actual adoption. Are we actually following what we are suggesting to do, or what the capability is suggesting to do? And then I think as you move through that life cycle, your measure of success is going to start focusing more on business impact, so are we actually having an impact with customers? Are we getting across the right message at the right time on the right channel? Are we aligning our content? It's not just about the channel, but are we also aligning to the preferences that we know our customers have? What do they want to hear about? And then ultimately, I think you can start looking at things like NPS scores. Are we actually seeing a shift in perceptions of us, or of our field, from our customers, such that you can clearly see there's a shift, comparing both pre-solution and post-solution? You can really nail that down and anchor it to the program that you've rolled out, because I think in the absence of HCP-level data in the majority of the countries we're working in, it's very hard to really measure field impact, so you need to look at these secondary measures. I would be very clear on where you are in the life cycle of the whole program, and how you performed on that particular measure before and after. And even if that means you do some sort of A/B testing as you go along, I think that's really key to be looking at.
Alan Kalton: Fantastic. Thank you, Mike. Really important to understand where you are on the journey and define your success criteria based on where you are. Great to hear that. Maybe I could ask Youssef to maybe make a few comments as well on this topic, if I may.
Youssef Idelcaid: Absolutely. And I liked how Mike laid it down. We tend to define success criteria as a fixed thing. It's a moving target, right? I think it also has to do with this notion of starting small. Your goal might be as simple as automating, for example, 10% of your process using AI and ML, and I think that's just enough. One of the really important things in engagement, for example, is logging the calls. Of course, there is an incentive that goes with that, but from an AI standpoint, there is a cost that goes with processing and mining these logs, and I guess success could be as simple as processing X amount or X percent of that data in order to make incremental improvements in generating content for the rep. I think that's an example. I also believe it is great, when defining success, to make part of it a little bit formal, because at the leadership level we tend to define these kinds of goals and KPIs, but when it comes to execution, I think we need to create an incentivized environment for our people, to make sure that if they are part of the journey, we reduce, for example, some of the workload they are carrying in a traditional day-to-day or business-as-usual mode. Because not everyone will make the journey, we know that by definition, but at least when we embark, I think we should be super... How can I say? Thoughtful when it comes to defining success for our people as well, because they need to be clear about what they're delivering and what they're being incentivized for. And that's by itself a big success criterion, because if you fail to get that level of transparency and visibility for the people who are executing your AI vision, then I think it doesn't matter what you are building. I think it's going to be just a failure, because you don't have the team to support it.
Alan Kalton: Fantastic. Thank you. Let's have a look at a question on skills and capabilities. You guys are veterans, you've been through this and you've looked at the way the organization was before, and now what the organization needs today in order to be successful. And what are some of those critical skills and capabilities that you've needed to really inject into the program that you didn't have before that you now see as the currency that drives success in your programs? And maybe I'll direct that to Youssef first, if I may.
Youssef Idelcaid: Thank you. Thank you, Alan. For me, it's a current question because I'm in the middle of this. I joined Genentech, and I think I've mentioned this also on the podcast, and found basically a lot of talent and rock stars. Skills, yeah, they're hard to get, and when you have talent, you need to cherish it, but I would also add to that skills question: what is the operating model you put in place in order to make this a success, right? In terms of skills, if you had asked me 10 years ago, or even 13 or 15 years ago, what are the skills? I would tell you, okay, get statisticians, data scientists, just the techy kind, right? Get the best developer in the world you can. But we all understand now that that piece of code is super tiny in the process. Instead, I think the skills that I came across in my journey in AI in general, but especially now that I'm in biotech and pharma, are roles like having good translators, right? Because I work day to day with my team and with the early adopters from the customer engagement department, for example, and organizations. So for people who are in the field, marketing and digital space, you need to have translators, people who understand enough of what that AI means for them, but who bring a sort of subject matter expertise in order to activate and enable whatever you are building. And these are usually a couple of folks in the organization that you need to identify, working with the leadership, because it has to be a win-win situation, because you're going to use them for X amount of time and they're going to spend time with you and your teams. I think that skill of translation and enablement is key. You need someone who can enable whatever solution you implement and roll out to the rest of the organization. And you can have multiple of them, right? For example, I'll give you an example very quickly.
For me, for a segmentation or targeting capability, I would need someone who speaks to marketing and targeting people, someone who understands that and someone who understands AI as well, so for me, that skill is very important. There is another skill, basically around technology itself. You need to have tech leads who are not just IT- and infrastructure-focused, but people who are empowered to enable your solution for the rest of the organization. Compute, for example, is a very important aspect of what you are trying to do, so that skill, it's not an easy one, because we tend to use IT as a support function, but translating that, or redefining the role of technologists, is key. Another role that I really introduced: you need to have someone who oversees the product you're building. The AI product is very important, right? Again, don't consider AI, that piece of code that you create, as a standalone thing that you try to sell. You go in front of your field directors and you open your notebook: okay, here is some nice code guys, I have super good accuracy. They don't care about that. Instead, make it part of a product build... I hired, by the way, my first machine learning product manager at Genentech, and it's very successful, because this person, a sort of unicorn with the right skills, oversees the product of next best action in collaboration with Aktana. And by the way, this kind of operating model that we have built with Aktana is working, because it puts all the skills together, technical, technology, and also enablement, in order to deliver. And commercial was a good example, because we have delivered a lot so far, and I think with that success, we are rolling out other things. But to me, get people who can translate what you are doing. That's very important. And then make sure that you are not using IT as a cost center, but you are using them as a partner. And I think titles like tech lead are very key.
The rest you can find everywhere, not to minimize data science work or coding, but I think it's very important to have governance. The product management motion doesn't have to be just something that Facebook and Google have. I think we can benefit from that in our organizations as well. I'm happy to hear from Paul and Mike what they think about this stage in their companies.
Alan Kalton: Fantastic. Maybe to Paul.
Paul Thompson: Yeah, it was interesting listening to you speak, Youssef. I think we had similar sorts of gaps, to be honest. I guess the first thing we would say is it's about bringing together a team of people with quite a number of different skillsets, if you want to successfully both build and then deploy something like this. But the concept of running it as a product, and the idea of then working in an agile way, that is definitely a journey that we are on, and it's in its infancy. Because it tends not to fit very well with the classic budgetary cycle and other governance processes that are set up within organizations, one of the things that we've certainly had to try and build is the skill to work as an agile product team within a more conventional setup, if you like, because that really is critical, I think, to driving this. The translation bit, I would absolutely echo. I think one of the big values we found in working with Aktana, actually, is that you do have a lot of the skills in that translating from business to technical and vice versa. The number of times I've seen a session in which a data scientist sits down alongside a brand lead and they could almost be talking two different languages. It just doesn't result in deliverable conclusions, so I think that's critical. Business process ownership is another space I think is often quite lacking. I don't know if it's the same in all organizations, but in ours, the commercial organization is perhaps not the first to have business processes well defined and clearly owned as an end-to-end thing. And again, what we're trying to do is put a product into a commercial model that has to sit within business processes. If that definition isn't there, if that ownership isn't there, that can also lead to challenges.
Alan Kalton: Fantastic, Paul. Any thoughts from you, Mike, as well on this one?
Mike Soper: I guess I echo the translation piece, and I'm thinking here mostly through the eyes of the guys in the field, having been in the field myself, to really, truly understand the what's in it for me. I mean, the same goes for any project, doesn't it? You've got to be clear on the what's in it for me, and what does it actually mean? I think it's all very good and well to have these awesome experts like Youssef and your team and the team that you're describing, but in the end, the person on the receiving end of all that great work has still got to understand: where's it come from, why has it come, and why should I take any notice of it?
Alan Kalton: Fantastic. Thank you. Very robust and rich answer to that relatively simple question, but it does talk about the level of transformation involved and the types of new skills and not just technical, which I think is a really important message. In the interest of time, we're going to conclude with one final question, which is going to be open to all of our panelists, because we've learned about your experience, you've learned about what's got you here and given some great insights into the ways you've garnered that success you've achieved. Love to get your view on what you see in the next three to five years. Where do you really think AI can go in commercial execution, and what do you think the potential is? And where would you like to see it go? Perhaps we'll go to Mike Soper.
Mike Soper: Yeah. Where do I start? I've got some ideas of what I would like to see in the future, whether or not it'll happen... I really relate to the whole Netflix model. I watch Netflix a lot. It tells me things that I'm going to enjoy watching before I've even heard of them; it knows so much about me. So I really want to see, in the next three to five years, how AI in our pharma space can bring that level of intelligence about our HCPs to the guys in the field. Keeping suggestions for content relevant and fresh, even if it's not, what is the HCP looking at? What is the customer's voice? What other NLP technologies can we leverage to really understand the voice of the customer and be provided insights that are just going to resonate every time, every single time we have a touchpoint with the customer? And the other one is more of a practical thing. I really, in the next three to five years, want to see the end of the traditional call plan. That is so old school now in my eyes: a fixed time period over which we anticipate some goals, when actually, if you think about all the data and processes and tools we've now got, there's no reason why we shouldn't have a fluid plan that is constantly evolving based on what our customers are doing, seeing, and experiencing at the time.
Alan Kalton: Fantastic, Mike. Great insights into the future. I hope it comes to pass.
Mike Soper: I hope so too.
Alan Kalton: Youssef, do you want to add some of your thoughts in there?
Youssef Idelcaid: Thank you so much, Mike. I liked how you centered that on the field at the beginning, because I think after all, yeah, we do this, at least in my organization, to make [inaudible] life easier. For me, if I had to put together a Christmas list, I'm thinking of achieving customer 360, right? I think it has to do with three components here. First, getting all the data. Again, not necessarily of good or perfect quality, but at least creating that environment where we have that 360 view of our customers, whether they be patients or HCPs. And I think using machine learning and AI can help mine that data, actually, because we think about data engineering as using basic legacy things like SQL and so on, but machine learning today can automate a lot of that and can help us process more data and mine more data in order for it to be ready for training. Second, when it comes to data analytics and AI: what are the elements of AI that make sense in the landscape of pharma? Because you cannot take the whole AI suite and just apply whatever you want, and I think there are specific types of models and theories that you need to invest in. For example, natural language processing; it's inevitable that everyone has to have a capability there, along with classification and segmentation, targeting capabilities in machine learning. I think that's also something that people need to really invest effort and time in, because we all know that, for example, recommendation-wise, one model doesn't fit all. You need to create personas of your customers using machine learning. If that's not done, it's not going to take you anywhere. Where I see this going as well is creating AI-driven approaches, but with the lens of digital, what I call AI-driven digital first.
Meaning, we all know, especially on the commercial and marketing side, the importance of having very strong digital platforms like [inaudible] and so on, so I think it's very important to make AI integrated and part of that digital strategy, right? I always say that digital is on top of everything, and then you have AI, and then you have ML, and you can create this hierarchy, right? It's very important to have this three-to-five-year roadmap with the lens of: I'm moving my AI capability, but I'm moving it also in congruence with my digital capabilities. Because you don't want to adopt thousands of portals and digital platforms that just overwhelm your internal, and also external, users, so there's a sort of rationalization that needs to happen as organizations grow. And that takes courage; it takes a village when it comes to prioritization, for example, start and stop. Start and stop has to be an exercise every year, or even every half year, to make sure that we don't miss the three-to-five-year target. So yeah, these are high level things. I'm very curious to hear what Paul has for that [inaudible].
Paul Thompson: There is so much more opportunity in pharma, I think. If you look at us in comparison to many industries, we've really only taken baby steps in terms of how we leverage AI and machine learning. Your example, Mike, of Netflix is a classic example, isn't it? That shows just how differently some industries are using it. One thing that really... It's strange to say it, but what kind of energized this space for me was the COVID period, where suddenly the way that we needed to engage our customers changed so much by necessity, rather than for any other reason. It opened people's eyes to how you can work in a much more multichannel way, and I think what I would like AI to be driving in a few years' time is this: if you start working in that multichannel way, then you need AI to drive the trade-offs, to understand how you optimize your mix to get the return on investment and where you focus, or how you expand your impact and focus your valuable resources into the areas where they're going to have the most impact. You can't do that without AI, I don't think. You can't do that without really seeing the trends in the data. And unstructured data is another huge opportunity. I mean, the opportunity we have to mine the Voice of Customer coming back through customer surveys, to mine call notes. We've started to do some of that with medical insights, but there's such a long way you can go there. And again, AI is the only way to do that. You just cannot do it without applying it, so in three to five years, I would hope that we've moved as an industry to where AI isn't a buzzword. It's actually just baked into the mainstream of our go-to-market strategies, because it's so essential.
Alan Kalton: Fantastic. What a lovely quote to end the panel conversation. AI is essential in the future. Thank you, Paul, thank you, Youssef, and thank you, Mike, for a very engaging panel conversation and some perfectly positioned and very candid responses to the questions today. I'd also like to extend a very sincere thank you to our attendees for their thoughtful engagement and for prompting these questions. And if you do have any follow up questions, please don't hesitate to reach out, we'd love to continue the conversation, so thank you very much and officially, goodbye.
Clay Hausmann: That's it for this special episode of Contextual Intelligence. We hope you enjoyed this discussion, and our sincere thanks to Youssef, Mike and Paul for their participation in that session, as well as to all of our other customers. If you'd like to hear some of the other sessions from the masterclass, you can find them in the resources section of the Aktana website or right here in the episode description on this page. Thanks very much for listening and we'll see you next time.
DESCRIPTION
“What are other people doing?” It’s the biggest question at the start of any major project—and omnichannel is no exception. In this special episode of Contextual Intelligence, industry leaders from Biogen, Genentech and Novartis draw from their firsthand experience deploying AI programs to share what they’ve learned in a frank roundtable discussion from Aktana’s recent Masterclass event. Want to learn more? You can watch every session from Aktana's “Build Once, Deploy Many: Laying the Foundation for Omnichannel AI at Scale” masterclass HERE.
Today's Host
Clay Hausmann
Today's Guests

Mike Soper

Youssef Idelcaid
Paul Thompson
