ChatGPT, a large language model developed by OpenAI, is a powerful tool that lets companies analyze data, generate answers to questions, and perform a variety of other functions. But while ChatGPT and other AI tools provide significant benefits to businesses, they also present a security risk, since these tools can be used to access sensitive data.
For example, a business can ask an AI tool – armed with crawled, seemingly unrelated data – to generate a list of customer complaints lodged against a specific company over a given period, say the last six months. With that data in hand, the business could target a competitor’s customers or model a campaign to win them away. Forward-thinking business owners, however, can partner with an experienced cybersecurity services firm to keep their information safe from prying AI.
So-called “data scraping” is one of the most direct ways a competitor’s information can be uncovered. This threat vector involves using an AI, like ChatGPT, to analyze the structure of a website or other online platform and generate automated scripts that extract data from it, such as product descriptions, pricing and availability. This can yield valuable insights into a competitor’s product offerings, pricing strategies, staffing, skills, upcoming announcements and more.
AI tools like ChatGPT can also “scrape” a competitor’s website for data on customer behavior, including the pages visitors view, the products they purchase, and the amount of time they spend on the site, providing insights into the competitor’s customer base and purchasing habits. Finally, these AI tools can scrape a competitor’s online advertising, such as copy and targeting information, to reveal their advertising strategies and the keywords they are using.
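To make the scraping threat concrete: the automated extraction scripts described above can be only a few lines long. This minimal Python sketch pulls product names and prices out of an HTML fragment using only the standard library; the markup, class names, and products are all hypothetical, standing in for a real competitor’s page.

```python
from html.parser import HTMLParser

# Hypothetical markup standing in for a competitor's product page.
SAMPLE_HTML = """
<div class="product"><span class="name">Widget A</span><span class="price">$19.99</span></div>
<div class="product"><span class="name">Widget B</span><span class="price">$24.50</span></div>
"""

class ProductScraper(HTMLParser):
    """Collects (name, price) pairs from spans tagged with class name/price."""
    def __init__(self):
        super().__init__()
        self.current = None   # which field the parser is currently inside, if any
        self.products = []    # accumulated (name, price) tuples
        self._pending = {}    # fields gathered for the product being parsed

    def handle_starttag(self, tag, attrs):
        cls = dict(attrs).get("class")
        if tag == "span" and cls in ("name", "price"):
            self.current = cls

    def handle_data(self, data):
        if self.current:
            self._pending[self.current] = data.strip()
            self.current = None
            if "name" in self._pending and "price" in self._pending:
                self.products.append((self._pending["name"], self._pending["price"]))
                self._pending = {}

scraper = ProductScraper()
scraper.feed(SAMPLE_HTML)
print(scraper.products)  # [('Widget A', '$19.99'), ('Widget B', '$24.50')]
```

An AI assistant can generate a script like this – adapted to a target site’s actual structure – in seconds, which is exactly why the defenses discussed below matter.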
ChatGPT and other AI tools can also be deployed to spy on competitors indirectly, for example through social engineering. Using this tactic, ChatGPT could create convincing phishing emails or social media messages that trick the target’s employees into giving up sensitive information. Alternatively, an AI tool can scrape a competitor’s social media channels for data on customer sentiment and engagement, yielding insights into that competitor’s brand reputation and customer engagement strategies.
ChatGPT can be used to analyze the language in a competitor’s online communications – marketing materials, social media posts, and customer reviews – providing valuable insights into that competitor’s brand messaging, customer sentiment, and overall strategy. It can also build predictive models from data on a competitor’s website or other sources to forecast customer behavior or identify emerging trends in a competitor’s market. Successfully executed, these and other under-the-radar threats can give competitors a significant advantage.
Businesses that want to guard against these kinds of unauthorized intrusions into their own organization should consult a cybersecurity solutions provider, who can recommend layered defenses that mitigate this kind of threat. One effective starting point involves limiting access to sensitive information by implementing access controls and permissions for all or selected files and folders. On the web side, a business can add rules to its site’s robots.txt file, which tells compliant search engine crawlers which URLs they may access on the site.
Or, a business can block crawler access to specific directories, such as its company blog:
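A minimal robots.txt sketch is shown below. GPTBot is the crawler name OpenAI publishes for its web crawler; the /blog/ path is illustrative. Note that robots.txt is honored voluntarily – well-behaved crawlers respect it, but it is not an enforcement mechanism.

```
# Keep OpenAI's GPTBot crawler off the entire site
User-agent: GPTBot
Disallow: /

# Keep all crawlers out of the blog directory (path is illustrative)
User-agent: *
Disallow: /blog/
```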
There are additional ways to guardrail a company’s data, including rate limiting that slows automated requests, CAPTCHA challenges that separate humans from bots, and web application firewalls that can detect and block scraping traffic.
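Rate limiting is one of the simplest of these guardrails to reason about. The sketch below shows a sliding-window limiter in Python, assuming a policy of at most `limit` requests per client per `window` seconds; the client identifier and thresholds are illustrative, and a production deployment would typically enforce this at the web server or firewall layer instead.

```python
import time
from collections import defaultdict, deque

class RateLimiter:
    """Sliding-window limiter: at most `limit` requests per `window` seconds per client."""
    def __init__(self, limit=10, window=60.0):
        self.limit = limit
        self.window = window
        self.hits = defaultdict(deque)  # client id -> timestamps of recent requests

    def allow(self, client_id, now=None):
        """Return True if this request is within the client's budget."""
        now = time.monotonic() if now is None else now
        q = self.hits[client_id]
        # Drop timestamps that have aged out of the window.
        while q and now - q[0] >= self.window:
            q.popleft()
        if len(q) < self.limit:
            q.append(now)
            return True
        return False

# Illustrative client IP; a fourth request inside the window is refused.
limiter = RateLimiter(limit=3, window=60.0)
results = [limiter.allow("203.0.113.7", now=t) for t in (0, 1, 2, 3)]
print(results)  # [True, True, True, False]
```

Even a modest limit like this blunts the AI-generated scrapers described earlier, which depend on issuing many requests quickly.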
Leading-edge tech companies are pouring huge sums of money into ChatGPT and other forms of AI. The benefits will be enormous, but the threats posed by AI will also continue to grow. Companies can maximize the benefit and minimize their exposure, however, by taking decisive action early on and following it up with ongoing monitoring and defensive development.
Carl Mazzanti is president of eMazzanti Technologies in Hoboken.