
TECH INTELLIGENCE: The dark side

Carl Mazzanti//April 24, 2023

ChatGPT, a large language model developed by OpenAI, is a powerful tool that lets companies analyze data, generate answers to questions and perform a variety of other functions. But while ChatGPT and other AI tools provide significant benefits to businesses, they can also present a security risk, since the same tools may be used to access sensitive data.

For example, a business can ask an AI tool – armed with crawled, seemingly unrelated data – to generate a list of customer complaints lodged against a specific company over a given period, say the last six months. With that data in hand, the business could go after a competitor’s customers or model a campaign to lure them away. Forward-thinking business owners, however, can partner with an experienced cybersecurity services firm to keep their own information safe from prying AI.

So-called “data scraping” is one of the most direct ways a competitor’s information can be uncovered. This threat vector involves using an AI tool such as ChatGPT to analyze the structure of a website or other online platform and generate automated scripts that extract data from it, such as product descriptions, pricing and availability. This can give a business valuable insight into a rival’s product offerings, pricing strategies, staffing, skills, upcoming announcements and more.
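To illustrate, here is a minimal sketch of the kind of extraction script such a tool might generate. The URL and page selectors are hypothetical placeholders; a real script would be tailored to the target site’s markup and assumes the common requests and BeautifulSoup libraries are installed:

    # Sketch of an AI-generated product scraper (hypothetical site structure)
    import requests
    from bs4 import BeautifulSoup

    def scrape_products(url):
        """Collect product name, price and availability from a listing page."""
        html = requests.get(url, timeout=10).text
        soup = BeautifulSoup(html, "html.parser")
        products = []
        for item in soup.select(".product"):  # hypothetical CSS selector
            products.append({
                "name": item.select_one(".product-name").get_text(strip=True),
                "price": item.select_one(".price").get_text(strip=True),
                "in_stock": item.select_one(".availability") is not None,
            })
        return products

    # Hypothetical usage:
    # listings = scrape_products("https://competitor.example.com/products")

A few dozen lines like these, run on a schedule, are enough to track a rival’s catalog and pricing over time.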

AI tools built on models like ChatGPT can also “scrape” a competitor’s website for data on customer behavior – the pages visitors view, the products they purchase and the time they spend on the site – providing insight into the competitor’s customer base and purchasing habits. These tools can likewise scrape a competitor’s online advertising, such as ad copy and targeting information, revealing its advertising strategies and the keywords it is using.

ChatGPT and other AI tools can also be deployed to spy on competitors indirectly, through tactics such as social engineering. Using this approach, ChatGPT could create convincing phishing emails or social media messages that trick the target’s employees into giving up sensitive information. Alternatively, an AI tool can scrape a competitor’s social media channels for data on customer sentiment and engagement, yielding insight into the competitor’s brand reputation and customer-engagement strategies.

ChatGPT can also analyze the language used in a competitor’s online communications – marketing materials, social media posts and customer reviews – providing valuable insight into the competitor’s brand messaging, customer sentiment and overall strategy. It can likewise create predictive models based on data from a competitor’s website or other sources, which can be used to forecast customer behavior or identify emerging trends in the competitor’s market. Successfully executed, these and other under-the-radar threats can give competitors a significant advantage.

Constructing a wall

Businesses that want to guard against these kinds of unauthorized intrusions should consult a cybersecurity solutions provider, who can lay out layered defenses to mitigate the threat. One effective starting point is limiting access to sensitive information by implementing access controls and permissions for all or selected files and folders. On the public-facing side, a similar limit can be set by adding rules to a site’s robots.txt file, which tells crawlers which URLs they may access on the site.
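For example, the following robots.txt rules ask OpenAI’s GPTBot crawler to stay off the entire site. One caveat: robots.txt is advisory – reputable crawlers honor it, but it does not technically block a determined scraper:

    # Ask OpenAI's crawler to stay off the whole site
    User-agent: GPTBot
    Disallow: /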

Or a business can block crawler access to specific directories, such as its blog:
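    # Ask all crawlers to stay out of the blog directory
    User-agent: *
    Disallow: /blog/

Again, these rules rely on crawlers behaving politely; they should be paired with the server-side defenses below.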

There are additional ways to safeguard a company’s data, including:

  • Encryption. Even if ChatGPT or a scraping script can access encrypted files, the information will be unreadable without the proper decryption key (see the encryption sketch following this list).
  • Multifactor authentication. MFA is an additional layer of security, which requires multiple forms of authentication – such as a password and a security token – before access is granted. This tactic may make it more difficult for ChatGPT to gain access to sensitive information.
  • Data loss prevention. DLP software identifies and prevents the unauthorized transfer or use of sensitive data. It may also assist in preventing ChatGPT from accessing sensitive information and transmitting it to an unauthorized source.
  • Employee training. One of the leading causes of data breaches is human error. So providing employees with training on file-security best practices, making them aware of the potential risks posed by ChatGPT, teaching them how to recognize and prevent data breaches, and periodically testing them may further reduce the risk of ChatGPT accessing sensitive information.
  • Monitor network activity. Businesses can also monitor their network activity for suspicious behavior, including unusual access patterns or attempts to access sensitive information (see the monitoring sketch following this list). Adding this defense layer may enable a company to detect and respond to potential threats before they become a major problem.
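As an illustration of the encryption layer, here is a minimal sketch using the widely available cryptography library for Python – the file name is a hypothetical placeholder:

    from cryptography.fernet import Fernet

    # Generate a key once and store it securely; without the key,
    # an intruder who reaches the file sees only ciphertext
    key = Fernet.generate_key()
    fernet = Fernet(key)

    with open("customer-list.csv", "rb") as f:  # hypothetical file
        ciphertext = fernet.encrypt(f.read())

    with open("customer-list.csv.enc", "wb") as f:
        f.write(ciphertext)

    # Only a holder of the key can recover the plaintext
    plaintext = fernet.decrypt(ciphertext)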
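And as a sketch of the monitoring layer, the following script counts requests per client address in a standard web-server access log and flags unusually heavy traffic that may indicate automated scraping. The log path and threshold are placeholders to be tuned for a real environment:

    import re
    from collections import Counter

    LOG_PATH = "/var/log/nginx/access.log"  # placeholder path
    THRESHOLD = 500  # requests per log window; tune to your traffic

    # Common Log Format lines begin with the client IP address
    ip_pattern = re.compile(r"^(\d{1,3}(?:\.\d{1,3}){3})\s")

    counts = Counter()
    with open(LOG_PATH) as log:
        for line in log:
            match = ip_pattern.match(line)
            if match:
                counts[match.group(1)] += 1

    # Flag unusually chatty clients for human review
    for ip, hits in counts.most_common():
        if hits >= THRESHOLD:
            print(f"Possible scraper: {ip} made {hits} requests")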


Leading-edge tech companies are pouring huge sums of money into ChatGPT and other forms of AI. The benefits will be enormous, but the threats posed by AI will also continue to grow. Companies can maximize the benefit and minimize their exposure, however, by taking decisive action early on and following it up with ongoing monitoring and defensive development.

Carl Mazzanti is president of eMazzanti Technologies in Hoboken.
