A methodology developed by Lucia Komljen, former Insight and Innovation Strategy Director at Telefonica, working with C Space 2015-2021.
The technology industry is slowly starting to accept that it needs to take a more responsible approach to innovation (and arguably, to business at large). More effort is needed to anticipate and mitigate the unintended consequences its innovations have on culture and society — or better still, to avoid them altogether. On its path to redemption, the industry is increasingly looking towards the humanities and social sciences to make sense of the past, present and future of our connected societies. And while the holistic viewpoints they facilitate are crucial (and must therefore continue to be embraced), there is still a voice missing at the table: the voice of those at the receiving end of what we make.
Typically, as per widely adopted practices like Design Thinking or the Lean Startup methodology, humans tend not to feature in the innovation process until the product development phase. At this point, referred to and treated as “users”, “customers” or the “addressable market”, they are interviewed and observed around pains and problems in relation to categories or occasions of interest. Further down the line, they may be involved in concept co-creation, hypothesis validation, usability design, and/or A/B testing of solutions.
These methods are by no means redundant — they still make sense to utilise, albeit only once a clear vision rooted in strong insights is in place to guide their application. However, prior to product development, it is up to a small cast, made up predominantly of technologists, to formulate this vision. More often than not, said vision is rooted solely in technological possibility (give or take commercial needs in the context of corporate innovation). And herein lies the deep flaw, for the things that excite technologists about technology don’t necessarily excite people — and what people want, technologists don’t always know.
For example, the technologists who made Alexa have been widely quoted as saying that they made her in the likeness of the Star Trek computer. Yet in Telefonica Innovation’s research project exploring the role people hope AI assistants will play in their lives in the future, our subjects overwhelmingly indicated that the role is akin to that of a mother, who cares for them and helps them make better life decisions (“It would take care of me.” / “I could be less connected as they would be taking care of everything for me.” / “It would help me make better decisions about what I do or eat.” / “When I’m feeling depressed, and isolating myself, it would motivate me and give me ideas for meet-ups in my area.”).
Messy, Not Binary
Suffice to say, life at large is messy. Human needs, behaviours and attitudes are contradictory, nuanced and fluid — never conveniently binary. Life is enriched through belonging, culture, experiences, learning new things, simple pleasures, doing a job well, keeping in good health, a sense of purpose — not merely through efficiencies when it comes to getting things done. Not every issue begs to be solved, and not every bit of friction resolved. Furthermore, knowledge of how technology truly works, let alone what is coming down the line, is overwhelmingly sparse. Typically, when people do engage with topics related to emerging technologies’ potential, awe and doubt feature in equal measure.
In order to innovate more responsibly — though no less originally — it is therefore essential to complement technical possibility with a deeper understanding of human truths at the start of the innovation process. To look outside-in, earlier.
Towards that end, over the last five years, Telefonica Innovation and our partners at C Space have been developing a methodology that makes human insight the centre of gravity in innovation strategy. Instead of searching for ‘problems to solve’ today, our methodology, which we call A Human Core, explores people’s future expectations of technology — and anticipates the impact (positive, neutral and negative) that it could have on their lives. We have mapped it out below, in tandem with some case studies from our project work and three reasons for contemplating its application.
A Human Core is made up of a sequence of projective, experimental research techniques that help us examine deeply held needs, desires, hopes, fears, attitudes and emerging behaviours around technologies we’re interested in doing things with — before we know exactly what that is.
The techniques include:
Learning from advanced markets that have already embraced technologies of interest, by way of ethnography, expert interviews and experiencing the places these respective parties like to spend time in.
Behavioural experiments that push people to extremes, in order to anticipate the direct impact a technology could have on their lives or what their deepest needs for it might look like.
Translating the language of technology into a language people understand, and using real-world examples, imagined (even provocative) concepts and discursive techniques to help them make sense of what it could do for them.
Giving people design briefs to create something with the technologies they are now skilled in and knowledgeable about, in order to reveal deeply held needs and desires that they are often unable to express (or are not even aware of). These apply to both the function and the engine of a product or service.
Finally, once field data has been translated into insights, quantifying these sense-checks them against the mindsets, attitudes and behaviours of a broader population (typically in the form of surveys).
1. Time Travel: 5G
While working on a network innovation project that tried to anticipate what people might want from 5G in the future, we travelled to South Korea to try and understand what life is like when it’s powered by the world’s fastest and most ubiquitous networks.
Looking beyond the marvel of streaming live baseball games in HD on the subway for hours, our key takeaway was that there is a profound tension at the heart of the next generation of connectivity. On the one hand, powerful and abundant connectivity leads to an increase in consuming new kinds of content experiences and having expressive, media-rich communications — both of which seemingly delight the customer, require more data usage, and are therefore (in theory) good for business. On the other hand, it can result in a decrease in focus and the ability to be present with others, as well as physical and mental exhaustion. South Koreans even have a colloquialism that socially stigmatizes excessive screen absorption in public: “headsdown”. In turn, we knew our approach to network innovation had to meet future connectivity needs as well as mitigate the risks inherent in connectivity’s overuse or misuse.
2. Experiment: AI x Video
When Telefonica Innovation took on the challenge to future-proof the company’s global video platform business, we ran an experiment trialing two different types of video diets as part of our research. We did this to help us understand which video experiences within people’s content diets mattered more than others, but also to help us figure out the optimum conditions for achieving balance in an age of AI-powered, bottomless streams. We began by subjecting our participants to a two-day video detox, as a means to help them reflect on the nature of their consumption, and to reset their systems. Then, one group was given an allowance of one hour a day (vs the global average of seven). The other group was allowed to maintain their current diet — however, they were also given a rudimentary index akin to ones found in the world of nutrition, through which they were asked to assess their current video repertoire (listing what they watch under the categories ‘nutritious’, ‘delicious’ or ‘junk’).
For one, this experiment revealed that social connections, physical movement and productivity suffered dramatically against a backdrop of video abundance and fragmentation. Moreover, only once video was taken away did people question the time they devoted to consuming content every day. Crucially, we also learned that the optimal way to facilitate balance is — first and foremost — through better creation and curation that respects people’s attention (more nutritious, some delicious, less junk).
3. Upskill: Affective Computing x VPAs
A couple of years ago, we ran a project on affective computing for our VPA division in order to determine what it will take for people to trust emotionally aware machines with their data and decisions. Ahead of asking them to workshop any services powered by it, it was essential to build our participants’ knowledge of the technology and its capabilities. To do this, we invited them into a physical ‘emotion-AI app store’, containing provocative concepts that showcased what the technology could do. Having just emerged from an ‘experiment’ phase that entailed tracking their emotions for a week, participants were now asked to ‘spend’ this new currency on what they deemed to be a worthy value exchange. For starters, it transpired that people are excited about this technology’s potential. We learned that sharing emotion-related data in exchange for positive mood shifts and emotional wellbeing was particularly welcome — while having it utilised in ecommerce and advertising experiences was less so. However, the terms of the preferred value exchange entailed that any mechanisms around collection, storage and usage be radically overhauled, so as to maximise the protection of what participants perceived to be the most sensitive, high-risk data set they would ever generate.
4. Workshop: AI x Security
In a workshop for a project that explored how we might resolve the tension between sharing data for convenience and the risks this poses to people’s identity, we challenged our participants (who had all been subject to data abuse of some sort) to figure it out through design.
One group came up with a tool they called Mr. Wolf, inspired by the Pulp Fiction character — a reliable assistant who cleans up your data mess and manages your permissions on your behalf, so that you don’t have to. Key here is the notion of autonomy — people don’t want to flick switches or assess what data traces they left where. What happens to their data is anything but easy to comprehend, which is why a trusted agent that looks out for it was desired. The analogy of Mr. Wolf spoke volumes about how they expected it to behave: being on top of and sorting out their data mess, exuding mastery of the issue at hand, asking no questions, and making no judgements. Beyond being highly useful ingredients for a security innovation brief, these are equally valuable for future design, brand and marcomms development.
5. Quantify: Data Sharing
We’ll stick with the theme of personal data for this final example. When the business asked us to help them understand how people really felt about sharing their data, we set out to quantify the attitudes we discovered in the field, using the language we picked up along the way.
We learned that just because there is value to be gained, it does not necessarily mean that the act of sharing data is a positive experience. In the field, we often heard people talk about ‘begrudgingly’ sharing data with services they valued — it was seen as a necessary evil, so to speak. The choice of statements on the reciprocal survey had to be equally nuanced, rather than a binary one between sharing and not sharing. The ‘begrudging’ option ultimately proved to be the most popular choice.
This led to a shift in mindset internally — we shouldn’t simply assume that gaining something of value in return is enough. Instead, we should consider how to make sharing data at least feel ok — by way of the right security mechanics, privacy options, degrees of control (e.g. tiered sharing, clearly delineating between what you get for what you give), and transparent explanations in the right places across all our data innovations.
Towards Innovation Strategy
In order to be effective in guiding innovation, human truths need to be aligned with a strong grasp of business needs, current and pending industry shifts, and deep knowledge of technological possibility. Couching them in this level of rigour ensures they can credibly give way to a strategy that guides relevant, responsible and original innovation.
Inside Telefonica Innovation, we have consistently benefitted from exploring and developing an understanding of humans, culture and society at the start of our own innovation processes. Practically, when it comes to the impact of this kind of work, the opportunity yield is vast and often unexpected. Additionally, there is a lot of so-called ‘soft power’ to be gained. Internally, collective mindsets — including those of our project stakeholders — are pointed at the long tail of meeting deeply held needs, honestly and openly. Empathy starts long before design demands it in its category-specific, narrow ways.
We can’t claim that upstream research projects can fully mitigate the risks posed by a service or product once it is released into the wild, into the hands of humans with varied ideologies, values and degrees of technological and digital literacy. At best, they can lessen those risks, but as Charlie Brooker once said about what lies at the heart of his show, Black Mirror: “it’s not a technological problem we have, it’s a human one”. Furthermore, broader organisational change is needed for a socio-cultural footprint to be neutral or purely positive in its essence.
However, for now, on your path towards responsible innovation, consider adopting or adapting a method like A Human Core, to help brands:
Ground innovation strategy, design and business development in human truths
Build empathy amongst a diverse innovation cast and stakeholders to overcome their biases
Anticipate & mitigate potential / unintended consequences early