Recap: AI Hangout: What Information Should and Shouldn’t Be Shared with AI


The discussion quickly centered on the tension between the utility of AI/LLMs and the critical need to protect data, particularly client and other private information.

Key Takeaways & Perspectives:

  • Security Over Convenience: Several participants, particularly those serving clients in highly regulated markets (e.g., Germany), emphasized a “Fort Knox” approach to data: intentionally keeping any personal or private client data out of public LLMs like ChatGPT or Claude, and relying instead on high-level, generalized prompts for brainstorming or strategy (a minimal redaction sketch appears after this list).

  • The “Lethal Trifecta” Concern: A security risk was raised that an LLM, especially one integrated via a connector with “super admin” privileges, could go “off the rails” and corrupt or exfiltrate sensitive CRM data. Mistrust of connectors was a recurring theme, with many opting not to link public LLMs to live HubSpot portals for exactly this reason (a conceptual capability check appears after this list).

  • Advanced, Safe AI Use Cases: The call included examples of sophisticated, low-risk AI adoption:

    • Structured Note-Taking: Using AI note-takers (like AskElephant) with custom-sculpted prompts and workflows to generate structured, tailored meeting notes. The notes are then fed to HubSpot in a controlled way, minimizing data exposure.

    • Automated Process Documentation: Using AI to generate step-by-step process documents and visual flowcharts from recorded client demonstrations. This non-sensitive, domain-specific knowledge is then placed in Breeze Assistant knowledge vaults in HubSpot, offering clients an internal, safe, on-demand reference tool.

  • The Problem of Over-Confidence: There was consensus that many users dangerously overestimate AI’s competence and accuracy outside of limited, well-defined tasks. This leads to risky behavior, like feeding real client names or other sensitive data to general-purpose LLMs without understanding the failure modes or data-retention risks.

  • Prompt Engineering & Fact-Checking: Strategies discussed for getting better results while minimizing risk include:

    • Treating the LLM as a “site thinker” or advisor that evaluates an existing strategic approach.

    • Asking the AI to base results strictly “on the facts that you know” and not “create something that is not true” (see the grounded-prompt sketch after this list).

    • Documenting the AI process (as one participant does via Google Docs) to maintain a complete historical record, similar to an audit trail.

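To make the “keep client data out of public LLMs” rule concrete, here is a minimal redaction sketch in Python. The regexes and placeholder labels are illustrative only and are no substitute for a dedicated PII-detection tool; the idea is simply to scrub known names and obvious identifiers before a prompt ever leaves your machine.

```python
import re

# Illustrative patterns only; real PII detection needs a dedicated tool.
REDACTIONS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),   # email-like strings
    (re.compile(r"\+?\d[\d\s().-]{7,}\d"), "[PHONE]"),     # phone-like numbers
]

def redact(prompt: str, client_names: list[str]) -> str:
    """Scrub obvious identifiers, then known client names, from a prompt."""
    for pattern, placeholder in REDACTIONS:
        prompt = pattern.sub(placeholder, prompt)
    for name in client_names:
        prompt = re.sub(re.escape(name), "[CLIENT]", prompt, flags=re.IGNORECASE)
    return prompt

print(redact(
    "Draft a renewal email to Jane Doe (jane@acme.example, +1 555-010-7788).",
    client_names=["Jane Doe", "Acme"],
))
# -> Draft a renewal email to [CLIENT] ([EMAIL], [PHONE]).
```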

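The “lethal trifecta” (a framing popularized by security researcher Simon Willison) names the dangerous combination behind the connector worries above: access to private data, exposure to untrusted content, and an outbound channel for exfiltration. A toy capability check, purely conceptual, shows why dropping any one leg defuses the risk:

```python
from dataclasses import dataclass

@dataclass
class AgentConfig:
    """Capabilities granted to an LLM agent or CRM connector."""
    reads_private_data: bool         # e.g. CRM records, inboxes
    ingests_untrusted_content: bool  # e.g. web pages, inbound email
    communicates_externally: bool    # e.g. sends email, makes HTTP calls

def has_lethal_trifecta(cfg: AgentConfig) -> bool:
    """All three together let a prompt injection read and exfiltrate data."""
    return (cfg.reads_private_data
            and cfg.ingests_untrusted_content
            and cfg.communicates_externally)

# A "super admin" connector that also reads inbound messages and sends email:
assert has_lethal_trifecta(AgentConfig(True, True, True))

# Removing any one leg (here, the outbound channel) breaks the combination:
assert not has_lethal_trifecta(AgentConfig(True, True, False))
```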

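Finally, the “stick to the facts you know” instruction can live in a reusable system prompt rather than being retyped in every chat. A sketch using the OpenAI Python SDK; the model name and exact wording are assumptions, and a system prompt reduces, but does not eliminate, fabricated answers:

```python
from openai import OpenAI

# Grounding rules distilled from the call: no invented facts, and an
# explicit escape hatch instead of guessing.
SYSTEM_PROMPT = (
    "Base every answer strictly on the facts that you know or that the "
    "user provides. Do not create something that is not true. If you are "
    "unsure, say so instead of guessing."
)

client = OpenAI()  # expects OPENAI_API_KEY in the environment

def grounded_ask(question: str) -> str:
    """Send a question with the grounding system prompt attached."""
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative; substitute your preferred model
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

print(grounded_ask("Evaluate this outreach strategy for a B2B SaaS launch: ..."))
```

Pairing a prompt like this with the audit-trail habit above, logging each prompt and response to a doc, makes it easy to fact-check outputs later.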
Loved these insights? We’re excited to bring you more opportunities to connect, learn, and discuss the future of AI. Join our community to be the first to know about upcoming events!
