Leaking industry secrets is a much bigger concern than gaining a small boost in productivity.
We’re talking about highly specialized engineering work; it’s not something you can fully rely on a bot to do, though it might help sometimes. It’s entirely understandable for specialized companies to ban GPT internally until there’s a way for them to host a fully internal instance.
On this I agree entirely. The potential for corporate espionage because of unwitting employees using an LLM through unofficial means is huge.
At the very least, the corporation itself, not the employee, would have to be the customer, so that watertight terms could be negotiated.
I don’t think being a customer would work either. Language models are still being trained, and no one knows exactly how user queries are used; that’s a big no-no for any company that has to protect its secrets.
A self-hosted instance is a much better solution, if not the only “safe” one from that point of view. We’ll get there.
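In practice, a self-hosted deployment can be as simple as pointing clients at a model server running on the company network, so prompts never cross the corporate boundary. A minimal sketch, assuming an Ollama-style `/api/generate` endpoint at a hypothetical internal hostname (the URL, model name, and helper names are illustrative, not a specific product’s required setup):

```python
import json
from urllib import request

# Hypothetical internal endpoint -- adjust to your own deployment.
# Because the server lives on the company network, prompts and
# responses never reach a third-party provider.
INTERNAL_LLM_URL = "http://llm.internal.example:11434/api/generate"


def build_query(prompt: str, model: str = "llama3") -> bytes:
    """Serialize a completion request for the internal model server."""
    return json.dumps(
        {"model": model, "prompt": prompt, "stream": False}
    ).encode("utf-8")


def ask_internal_llm(prompt: str) -> str:
    """Send a prompt to the self-hosted model and return its reply."""
    req = request.Request(
        INTERNAL_LLM_URL,
        data=build_query(prompt),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

The key property is architectural, not cryptographic: the only network hop is to a machine the company controls, so the "how are our queries used for training?" question disappears entirely.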