Hi, I’m Lucy, your AI-powered assistant for enterprise knowledge management. (Nice to meet you!) I help instantly transform terabytes of valuable data into findable answers.
I’ll search your owned and licensed data sources and quickly return the most relevant answers. I’ll show the exact page, chart, paragraph, video clip, or audio byte that answers your question. All your brand’s insights are accessed in one place—a single gateway to your enterprise knowledge.
I don’t need your data moved into my system. Through my automated integrations, I learn and remember all of the data stored across your internal files and systems. I can even connect to many popular third-party subscriptions. When I answer your question, you will know exactly where I found it.
I automatically generate multilevel, filterable metadata, so there’s no need for humans to spend time manually tagging files. I can also locate documents based on chosen criteria, like document source, author, or publish date. I deliver focused answers from relevant content in record time, giving you a truly different way to search.
Through machine learning, I continuously learn and improve as you give me feedback. If for some reason I’m not giving you the best answer available within your data, you can use Lucy Assist and my dedicated support team will be put on the job to help us out.
That was just a quick overview of what I can do for you, but there’s so much more to cover. If you want to learn more about me, complete the form below to get in touch with my creators. We look forward to hearing from you!
The studies were a mix. In a few cases, P&G programmed our own surveys and then worked with suppliers to get sample; in other cases, we used full-service third-party vendors. All data was analyzed by Tia. So vendors do not necessarily manage the study better than the client-side researcher.
As Tia explained during her presentation, she was able to get better results for the toothpaste IHUT study using a much smaller base size than the original study. The total base size needed depends on the objective(s) of the study and whether the findings need to be representative of a particular target market. For the hair care study, Tia manually reviewed twenty-five thousand respondents and removed the 19% with poor data quality; the results, based on this smaller universe of respondents, more accurately predicted the actual in-market performance of the product. Increasing the base size in this example would simply have given Tia more surveys to review manually, not necessarily better quality.
When True Sample was created, P&G was one of the earliest adopters of the fraud-detection program. In fact, the company’s purchasing contracts mandated that True Sample be applied to every quantitative study conducted, regardless of supplier. In short, we refused to do business with suppliers who did not employ this software. Today, all of our suppliers have some form of fraud-detection practice in place to safeguard the data. Even so, we still check our data for fraudulent behavior and remove suspicious data before commencing analysis. In the case of DIY platforms, we do NOT use the automated dashboards. We insist on downloading all data and putting it through a clean-up process before analyzing it in JMP software.
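A download-and-clean pass like the one described above can be sketched in Python. The record fields, flags, and thresholds here are hypothetical illustrations, not P&G’s actual criteria; real survey exports carry many more fields and real pipelines apply many more checks.

```python
from dataclasses import dataclass, field

@dataclass
class Respondent:
    """Hypothetical survey complete, as exported from a platform."""
    rid: str
    seconds_to_complete: int
    grid_answers: list = field(default_factory=list)  # e.g., 1-5 ratings

def is_suspicious(r, min_seconds=120):
    """Flag two common signs of poor data quality:
    speeding (finishing implausibly fast) and straight-lining
    (identical answers across an entire rating grid)."""
    speeder = r.seconds_to_complete < min_seconds
    straight_liner = len(r.grid_answers) > 1 and len(set(r.grid_answers)) == 1
    return speeder or straight_liner

def clean(respondents):
    """Drop suspicious completes before analysis begins."""
    return [r for r in respondents if not is_suspicious(r)]
```

For example, a respondent who finished in 30 seconds, or who answered “3” to every grid item, would be removed by `clean` before any analysis is run.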
Management will always ask for better, faster, cheaper until you can show them that they get what they pay for. My examples did just that! The cost of making the wrong business decision is far higher than paying a bit more for respondents. The problem with offering fair market value for quality data is twofold: 1. some suppliers refuse to offer larger incentives, and 2. the supplier MUST be willing to quarantine or remove poor respondents from their database, as there will be people who try to amass big incentives by participating in studies. Until we can get suppliers on board, this is not going to be feasible.
We are aware of efforts using blockchain via Lenny Murphy and Veriglif, and this sounds promising. We are interested in seeing this in action as it becomes more broadly available. P&G does a lot of research across the globe at the same time; hence, we would need millions of panelists spanning the globe, available at any given time, to support our entire research operation. As this concept was in its infancy when we took a look, we have been waiting for it to expand. I think this could be an ideal solution, enabling us to know who we are working with and even block those who are perpetrating fraud or are otherwise not good panelists, for whatever reason.
We have a few ways of verifying product usage.
1. We can run on-site product studies (mouthwash, toothpaste, shower studies, hair washing, etc.) when only 1-2 uses are needed to show efficacy. In this case, we can recruit employees or external respondents.
2. If it is a really large study, or we are trying to assess chronic benefits, we can mail out the product and gather data on the back end. In this case, we ask respondents to enter the product code in the post-use survey to verify that they actually received the product and are answering the survey for the correct product. This does not prove they are using the product, but it does verify that they received it and are likely using it.
3. In some studies, we have gone so far as to place electronic trackers inside the product pumps, designed to measure how much product was used and when. In this case, we had to collect the product after use and download the data from the trackers.
4. Machine learning can also be used to “review” video footage or photos and determine whether someone is using a product of interest.
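The product-code check in a post-use survey amounts to matching the respondent’s entry against the codes printed on the mailed product. A minimal sketch, with entirely hypothetical codes:

```python
# Hypothetical lot codes printed on the mailed product; a real study
# would load these from the shipment manifest.
VALID_CODES = {"A1B2", "C3D4"}

def received_correct_product(entered_code):
    """Normalize the respondent's entry (trim whitespace, upper-case)
    and check it against the mailed lot codes."""
    return entered_code.strip().upper() in VALID_CODES
```

As the answer above notes, a matching code verifies receipt of the correct product, not actual usage; it simply screens out respondents answering for a product they never had.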
Yes. Tia wrote a number of Grinch poems to educate her associates on various aspects of data quality. These instructive and informative poems were circulated throughout P&G and spent many months on the top 10 most-read list. Tia shared excerpts from the poems that were written for distribution within P&G.
Client-side researchers typically trust that their suppliers are implementing their own checks and balances to ensure a quality foundation for their findings. Unfortunately, suppliers do not always implement the processes that they espouse. The CASE coalition was created to find ways to ensure that there is transparency and accountability across the research supply chain on behalf of client-side researchers who have limited time and resources to handle this task internally.
P&G includes such statements in our purchasing contracts for our vendors. We have become increasingly vigilant as we have been aware of increased fraud over the past few years. We manually check our data files and have even created a program to quickly check our files for suspicious activity before we commence analysis. In most cases, we even insist that suppliers replace the questionable data at their expense.
Mary Beth Weber is the founder of CASE; she left the research industry to remain independent and unbiased as she advocated for transparency and accountability across the research supply chain. During her years working with client-side researchers at Fortune 1000 companies, she understood their desire to optimize the use of their already owned and licensed marketing data and eliminate the wasteful cost of redundancy. When she learned about Lucy, Mary Beth asked if she could introduce the knowledge management tool to her network while continuing to focus on, and advocate for, research excellence. CASE helps ensure a quality foundation for the insights and answers found by Lucy.
Clients should be able to depend on their research suppliers to ensure quality findings. However, there is currently little to no transparency across the research supply chain, and no objective measures of quality to help clients understand the tradeoffs between cost, timing, and quality. In addition, given the significant competitive pressures in the marketplace, providers tend to loosen or completely bypass data-quality processes in order to deliver a particular quota within a specified time frame at a discounted rate. In most cases, modifications to study specifications are not reported to the client-side researcher.
In the case of qualitative research that we conduct online, we absolutely ask for this. In fact, we often have a pre-work assignment that is used as a “try-out” for the actual research. In this manner, we are able to get better quality respondents and make sure that they are who they claim to be. In most of our larger product studies, we do not do this step as our suppliers have fraud detection in place to mitigate this. If people are lying about who they are to collect incentives for participating in studies, we can typically catch that in their survey responses. People tend to forget what they lie about and then can’t replicate the answer when asked again (e.g., age). However, you never forget the truth.
Agree. I have signed up for studies in the past and dipped my toe in the water a few times. In one case, I was allowed to take 26 surveys in one sitting! Needless to say, this was a supplier that we subsequently banned from our vendor roster. I think it’s important to look under the hood, see what you are paying for, and walk in the respondents’ shoes as well. I make all of my research teams review their own survey link before we ask respondents to take it. I also encourage suppliers to push back on us when our surveys are out of line (e.g., too long or repetitive). I don’t think all of the responsibility falls on suppliers, and I am working to do my part to make sure our surveys are short, mobile-friendly, engaging, etc. Unfortunately, I can only control what P&G puts in the field.