
[FEATURE] User-selectable context linking between conversations in ChatGPT #2219

@NordanViking

Description


🎯 Background – The problem today

Many ChatGPT users work on long-term topics:
• Ongoing technical discussions
• Creative projects
• Research notes
• Personal planning
• Repeated thematic conversations

But currently there is no way to link, re-use or activate context from previous conversations.

Each chat thread is isolated, which leads to:
• Re-explaining the same things over and over
• Losing long-term continuity
• Token and GPU waste
• Reduced usability for serious projects
• Frustration for both users and the system


✅ Proposed solution: User-controlled context activation

Add a feature that allows the user to select which past conversation threads should be active in the current chat.

Two possible interaction styles:

  1. Command-based

“Include the heat pump discussion from September 27.”
“Link the thread where we talked about freedom of speech.”

  2. Visual checkbox selector

Example:

[✓] Game rules project – Oct 10
[ ] Music discussion – Sep 29
[✓] Energy system planning – Oct 1
[ ] Genealogy thread – Aug 22

Only selected threads get loaded into context.

This is not permanent memory – it is temporary, user-chosen context.
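To make the mechanism concrete, here is a minimal sketch of what the selection-and-loading step could look like. All names (`Thread`, `build_context`) and the token-budget behavior are hypothetical illustrations, not a proposed implementation:

```python
from dataclasses import dataclass

@dataclass
class Thread:
    title: str
    summary: str   # condensed representation of the past conversation
    tokens: int    # estimated token cost of including the summary

def build_context(threads, selected_titles, token_budget):
    """Assemble a temporary context from only the user-selected threads.

    Threads are included in selection order; any thread that would
    exceed the budget is skipped so the active chat keeps headroom.
    """
    by_title = {t.title: t for t in threads}
    context, used = [], 0
    for title in selected_titles:
        t = by_title.get(title)
        if t is None or used + t.tokens > token_budget:
            continue
        context.append(f"[{t.title}]\n{t.summary}")
        used += t.tokens
    return "\n\n".join(context), used

threads = [
    Thread("Game rules project", "House rules for initiative and crits.", 120),
    Thread("Music discussion", "Chord voicings for jazz standards.", 90),
    Thread("Energy system planning", "Heat pump sizing and capacity notes.", 150),
]

# Mirror the checkbox example above: only the ticked threads are loaded.
ctx, used = build_context(
    threads, ["Game rules project", "Energy system planning"], token_budget=1000
)
```

The key property is that unselected threads never enter the context at all, which is what gives both the privacy guarantee (no accidental context-bleed) and the token savings.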


🧠 Example use case

A user has accumulated several deep chats over months, for example:

• Heat pump installation
• Role-playing game rules
• Personal health research
• Long-term philosophical topics
• Coding experiments
• AI art prompts

Instead of summarizing again, the user simply activates the threads they want, and continues instantly.


📌 Benefits for users

✅ Can build long-term projects without repetition
✅ No more “start from zero” effect
✅ User has full control over what the model should remember
✅ No accidental context-bleed from unrelated chats
✅ Saves time, energy and frustration


⚙️ Benefits for OpenAI / system side

✅ Reduces token usage (only relevant threads loaded)
✅ Reduces GPU cost per answer
✅ Less unnecessary context-processing
✅ Fewer hallucinated memories
✅ Clearer privacy model (explicit user-consent)
✅ Higher trust and usability for serious work

Even a 10–15% average token reduction per conversation would, at scale, translate into substantial cost savings.
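As a rough back-of-the-envelope check, with purely illustrative numbers (the volume and context-size figures below are assumptions, not actual OpenAI usage data):

```python
# Illustrative numbers only - not actual OpenAI usage or pricing data.
daily_conversations = 10_000_000   # assumed conversation volume
avg_tokens_per_conv = 2_000        # assumed average context size
reduction = 0.125                  # midpoint of the 10-15% estimate

tokens_saved_per_day = daily_conversations * avg_tokens_per_conv * reduction
print(f"{tokens_saved_per_day:,.0f} tokens saved per day")
```

Under these assumptions the feature would avoid billions of context tokens per day, before even counting the compute saved by not re-processing unrelated threads.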


🔄 Summary

This feature would:

✔️ Make ChatGPT scalable for long-term real projects
✔️ Allow intentional, selective memory without risk
✔️ Reduce wasted tokens and compute
✔️ Improve output accuracy
✔️ Create a new “middle layer” between full memory and no memory at all


📎 Possible feature names

• Selectable Memory
• Linked Threads
• Context Picker
• Memory Switchboard
• Manual Context Linking
• “Use Conversation X in this Chat”


🖊️ Proposed by

GitHub user: NordanViking


Thanks for reading – I hope this idea is useful for both users and OpenAI.

