# general
s
I can't speak for other organizations, but we're approaching this experimentally, assessing specific tools rather than setting a general policy. It's far easier to decide whether Grammarly or Copilot is useful than to make a policy at the "AI" level, if that makes sense?
j
@Steve Fenton but then do you create a policy or guidelines per tool? e.g. Tool A is good for this, but Tool B is good for that specific thing and shouldn't be used for this, and don't put customer data into any of them?
s
Yes, that's closer to what we're doing. We look at how a tool uses data before we try it out and create guidance on what kinds of data you can use (for example, in prompts). Then we run a trial and see how everyone feels about the tool. Most of the tools that "made it" have a great story around data and privacy. This is a great example of what good looks like: https://www.grammarly.com/privacy

We're also taking a deliberate path of avoiding the allure of "volume". Yes, we could get "more things" by pumping them out, but we're more interested in creating fewer things at higher quality. That makes some of the options (ChatGPT) less appealing to me.