Dear OpenAI Team,

I hope this message finds you well. I’d like to propose a new approach to enhance AI learning while maintaining high-quality knowledge standards.
Currently, AI models do not retain knowledge from individual user interactions due to privacy and quality concerns. However, a community-validated learning mechanism could allow AI to evolve dynamically without compromising accuracy or security.
Proposed Idea:
Users can opt-in to contribute their conversations to improve the AI knowledge base.
Newly acquired knowledge goes through a community validation system (e.g., upvotes/downvotes or expert moderation).
Only highly validated insights are incorporated into the model’s training data in future updates.
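The validation step in the proposal above could be sketched roughly as follows. This is only an illustration of the idea, not a real OpenAI mechanism: the `Contribution` type, the vote counts, and the `min_votes` / `min_ratio` thresholds are all hypothetical choices.

```python
from dataclasses import dataclass


@dataclass
class Contribution:
    """A user-contributed insight with community votes (illustrative)."""
    text: str
    upvotes: int = 0
    downvotes: int = 0


def validated(contributions, min_votes=10, min_ratio=0.9):
    """Keep only contributions the community has strongly endorsed.

    A contribution passes if it has received enough total votes
    (min_votes) and a high enough approval ratio (min_ratio).
    Both thresholds are made-up example values.
    """
    accepted = []
    for c in contributions:
        total = c.upvotes + c.downvotes
        if total >= min_votes and c.upvotes / total >= min_ratio:
            accepted.append(c)
    return accepted
```

Only the contributions that clear both bars would then be candidates for inclusion in a future training update; everything else is discarded or sent back for more review.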
Benefits:
✅ Faster AI learning and adaptation
✅ High-quality knowledge filtering through community validation
✅ Increased user engagement in shaping AI’s development
By leveraging crowdsourced validation, OpenAI could develop a more adaptive AI without sacrificing accuracy or user trust. I’d love to hear your thoughts on this idea and whether such an approach could be explored further.
Looking forward to your feedback.
Best regards,
Péter SESZTÁK
"additional safeguards, such as a system that gives more weight to contributions from verified experts or long-standing community members": clever idea, it's all about weight factors! :)
A basic but usable implementation doesn't seem complicated; the real question is how open OpenAI is to this. Specifically, to expanding the knowledge base by opening it up: extending it with many, many small, atomic knowledge bases. I don't know OpenAI's policy for training, validating, and releasing a new model, but I suspect they've done this in large batch mode in the past, and I don't know how open and prepared they are for this completely new approach. What do you think?
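The weight-factor idea mentioned above could look something like this. Everything here is hypothetical: the roles, the weight values, and the acceptance threshold are illustrative choices, not anything OpenAI has specified.

```python
# Hypothetical role weights: a verified expert's vote counts more
# than a regular user's. The exact values are illustrative.
ROLE_WEIGHTS = {
    "verified_expert": 3.0,
    "long_standing_member": 2.0,
    "regular_user": 1.0,
}


def weighted_score(votes):
    """Sum role-weighted votes; each vote is a (role, direction) pair
    where direction is +1 (upvote) or -1 (downvote)."""
    return sum(ROLE_WEIGHTS.get(role, 1.0) * direction
               for role, direction in votes)


def accepted(votes, threshold=5.0):
    """A contribution is accepted once its weighted score clears an
    (illustrative) threshold."""
    return weighted_score(votes) >= threshold
```

With weights like these, one expert endorsement counts as much as three regular upvotes, which is one simple way to bias the validation toward trusted contributors without locking regular users out entirely.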