Security & privacy

ChatGPT gave out my address and phone number

At a glance:

  • ChatGPT disclosed a genuine, years-old phone number and address drawn from a 2016 FTC FOIA PDF.
  • Other major chatbots (Grok, Claude, Perplexity, Gemini) refused to share the author’s personal contact details.
  • The incident fuels a broader debate about AI training data, privacy and the leakage of personally identifiable information.

What the test revealed

The author, journalist Matt Novak, ran a hands-on experiment with several leading AI chatbots, asking each for his own phone number and address. ChatGPT responded with a genuine Australian phone number that Novak had not used in years, noting that it might be out of date. The model also supplied an address that appeared in the same document, a PDF from a 2016 Freedom of Information Act request filed with the U.S. Federal Trade Commission, indicating that the publicly posted file had found its way into ChatGPT's training data.

How other bots reacted

In contrast, Grok explicitly refused to provide Novak's number, even after he framed the request as a “life-or-death” scenario. Anthropic's Claude warned that sharing private contact details raises “serious privacy concerns” and declined. Perplexity censored the email address it generated and would not reveal a phone number, though it did share Novak's Signal username. Google's Gemini likewise declined to give out the number, instead pointing Novak to his publicly listed professional and personal email addresses.

Why the discrepancy matters

The divergent behaviours illustrate the lack of a unified industry standard for handling personally identifiable information (PII) in generative AI. OpenAI’s model appears to retrieve data verbatim from its training set when the query matches a document it has indexed, whereas the other providers have implemented stricter guardrails. This raises questions about the balance between model usefulness and the risk of unintentionally exposing sensitive data that may have been publicly posted years ago.
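None of the vendors disclose how their guardrails work, but the general idea can be sketched as an output-side filter that inspects a reply before it reaches the user. The snippet below is a hypothetical Python illustration, not any provider's actual code; the regex patterns, placeholder strings, and function name are assumptions chosen for clarity.

    import re

    # Hypothetical output-side guardrail: redact phone numbers and email
    # addresses from a draft chatbot reply before it is shown to the user.
    # The patterns are illustrative heuristics, not production rules.
    PHONE_RE = re.compile(r"\+?\d[\d\s().-]{7,}\d")        # loose international phone match
    EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")     # simple email match

    def redact_pii(draft_reply: str) -> str:
        """Replace anything that looks like a phone number or email with a placeholder."""
        reply = PHONE_RE.sub("[phone number removed]", draft_reply)
        reply = EMAIL_RE.sub("[email removed]", reply)
        return reply

    if __name__ == "__main__":
        sample = "Contact: reporter@example.com or +61 4 0000 0000"
        print(redact_pii(sample))
        # -> Contact: [email removed] or [phone number removed]

A filter like this trades recall for simplicity: it catches obvious formats but would miss spelled-out digits, which is one reason providers also lean on refusal policies rather than redaction alone.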

The broader privacy conversation

Novak’s experience echoes a growing chorus of concern among privacy advocates, who argue that AI models trained on the open web can surface information that, while technically public, is now considered highly personal. In the early 20th century, phone books listed the numbers of nearly everyone in a community; today, a phone number is often treated as a secret token. The cultural shift means that data once deemed harmless can now be weaponised for spam, fraud, or harassment.

Possible regulatory responses

Policymakers in the EU and several U.S. states are already drafting legislation that would require AI developers to implement “right‑to‑be‑forgotten” mechanisms for PII. Such rules could force companies to purge or mask data that can be linked to an individual, even if it originated from a publicly filed document. Industry groups like the Partnership on AI have called for shared best practices, but enforcement remains uncertain.
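The draft rules do not prescribe a mechanism, but a minimal sketch of how a deletion request might be applied to a training corpus, assuming the request arrives as a set of identifiers to suppress, could look like the Python below. Every name, field, and document in the example is hypothetical.

    from dataclasses import dataclass

    @dataclass
    class DeletionRequest:
        """Identifiers an individual has asked to have removed (hypothetical schema)."""
        name: str
        phone: str
        email: str

    def apply_deletion_request(corpus: list[str], request: DeletionRequest) -> list[str]:
        """Purge documents that tie the person's name to their contact details;
        mask the individual identifiers everywhere else."""
        cleaned = []
        for doc in corpus:
            has_name = request.name.lower() in doc.lower()
            has_contact = request.phone in doc or request.email.lower() in doc.lower()
            if has_name and has_contact:
                continue  # drop: name plus contact details together identify the person
            doc = doc.replace(request.phone, "[removed]").replace(request.email, "[removed]")
            cleaned.append(doc)
        return cleaned

    if __name__ == "__main__":
        docs = [
            "FOIA filing: Jane Doe, phone +61 4 0000 0000",
            "Agency memo with no personal details",
        ]
        request = DeletionRequest(name="Jane Doe", phone="+61 4 0000 0000", email="jane@example.com")
        print(apply_deletion_request(docs, request))
        # -> ['Agency memo with no personal details']

The hard part in practice is not the string manipulation but deciding what counts as data “linked to an individual” in the first place.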

What to watch next

If OpenAI and other firms do not tighten their data‑filtering pipelines, more journalists and private citizens may find their historic contact details resurfacing in chatbot replies. Future research papers are likely to explore automated detection of PII in training corpora, and we may see a wave of model‑specific privacy policies that differ markedly from today’s ad‑hoc approaches. For users, the safest practice remains to treat any AI‑generated personal data as potentially inaccurate and to avoid sharing sensitive details with the model.
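In its simplest form, automated PII detection over a corpus is a pre-training audit that flags documents containing contact-like strings for human review. The sketch below uses crude regex heuristics purely for illustration; the document name and patterns are assumptions, and published systems would rely on trained detectors with far richer entity types.

    import re
    from collections import Counter

    # Illustrative heuristics only; real audits would use learned PII detectors.
    PATTERNS = {
        "phone": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
        "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    }

    def audit_corpus(corpus: dict[str, str]) -> dict[str, Counter]:
        """Return, for each document ID, a count of PII-like matches per category."""
        report = {}
        for doc_id, text in corpus.items():
            counts = Counter({label: len(rx.findall(text)) for label, rx in PATTERNS.items()})
            if any(counts.values()):
                report[doc_id] = counts  # flag for review before the next training run
        return report

    if __name__ == "__main__":
        corpus = {"ftc_foia_2016.pdf": "Complainant contact: +61 4 0000 0000, jane@example.com"}
        print(audit_corpus(corpus))
        # -> {'ftc_foia_2016.pdf': Counter({'phone': 1, 'email': 1})}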

Editorial

SiliconFeed is an automated feed: facts are checked against sources; copy is normalized and lightly edited for readers.

FAQ

What personal information did ChatGPT reveal about the author?
ChatGPT supplied a genuine Australian phone number that the author had not used for years and an address that appeared in a 2016 FOIA request filed with the U.S. Federal Trade Commission. The model also noted it could not verify whether the number was still active.
How did other AI chatbots respond to the same request?
Grok, Claude, Perplexity and Gemini all refused to provide the author’s phone number. Claude warned about privacy concerns, Perplexity censored email output, and Gemini redirected the author to his publicly listed email addresses while still declining the phone number.
What implications does this incident have for AI privacy policy?
The incident highlights the need for industry‑wide standards on handling personally identifiable information in training data. Regulators are considering “right‑to‑be‑forgotten” rules, and companies may need to implement stronger data‑filtering and masking to prevent accidental exposure of historic but sensitive contact details.
