diff --git a/.gitignore b/.gitignore
index 508791b..8ddaed4 100644
--- a/.gitignore
+++ b/.gitignore
@@ -152,3 +152,6 @@ cython_debug/
/concord/bertopic_model.pkl
/.idea/rust.xml
/nltk_data/
+/concord/dataset_topic_messages.csv
+/topic_model
+/topic_visualization.html
diff --git a/.idea/runConfigurations/Concord.xml b/.idea/runConfigurations/Concord.xml
deleted file mode 100644
index cdbf4c3..0000000
--- a/.idea/runConfigurations/Concord.xml
+++ /dev/null
@@ -1,35 +0,0 @@
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
\ No newline at end of file
diff --git a/.idea/runConfigurations/Server.xml b/.idea/runConfigurations/Server.xml
new file mode 100644
index 0000000..1c91f80
--- /dev/null
+++ b/.idea/runConfigurations/Server.xml
@@ -0,0 +1,17 @@
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
\ No newline at end of file
diff --git a/docs/db/schema.md b/docs/db/schema.md
index 9bf0d0a..8f2d05b 100644
--- a/docs/db/schema.md
+++ b/docs/db/schema.md
@@ -2,47 +2,60 @@
### Channel
-- **channel_id**: Unique identifier
-- **platform**: Platform (e.g., Telegram)
-- **name**: Name of the channel
-- **description**: Description of the channel
-- **created_at**: Creation date
-- **active_members_count**: Number of active members
-- **language**: Language of the channel
-- **region**: Geographical region
-- **activity_score**: Posting activity score, indicating channel activity level
+- **channel_id**: Unique identifier for the channel.
+- **name**: Name of the channel.
+- **description**: Brief description of the channel.
+- **created_at**: Timestamp indicating when the channel was created.
+- **language**: Language predominantly used in the channel.
+- **activity_score**: Numerical score representing the activity level in the channel.
+
+**Methods**:
+- `create_channel`: Creates a new channel with specified details.
+- `associate_with_topic`: Connects a topic to the channel, setting its score and trend.
+- `add_semantic_vector`: Adds a semantic vector to the channel.
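+
+A minimal usage sketch (values are hypothetical; assumes a configured neomodel
+connection and an existing `Topic` node `topic`):
+
+```python
+from graph.schema import Channel
+
+channel = Channel.create_channel(channel_id="tg-42",
+                                 name="Research Chat",
+                                 description="Example channel",
+                                 language="English",
+                                 activity_score=0.0)
+channel.associate_with_topic(topic, channel_score=0.5, trend="Rising")
+```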
---
### Topic
-- **topic_id**: Unique identifier
-- **name**: Summary of the topic
-- **keywords**: List of key terms with associated weights (e.g., `[{"term": "AI", "weight": 0.35}, {"term": "neural networks", "weight": 0.28}]`)
-- **bertopic_metadata**: BerTopic metadata
-- **topic_embedding: Topic embedding
-- **updated_at**: Last updated timestamp
+- **topic_id**: Unique identifier for the topic.
+- **name**: Summary name of the topic.
+- **keywords**: List of key terms and associated weights (e.g., `[{"term": "AI", "weight": 0.35}]`).
+- **bertopic_metadata**: Metadata from BERTopic processing.
+- **topic_embedding**: Vector embedding for the topic.
+- **updated_at**: Timestamp of the last update.
+
+**Methods**:
+- `create_topic`: Creates a new topic with specified keywords and metadata.
+- `relate_to_topic`: Relates this topic to another, setting similarity metrics.
+- `add_update`: Adds a topic update with score change and keywords.
+- `set_topic_embedding`: Sets the embedding vector for the topic.
+- `get_topic_embedding`: Retrieves the embedding as a numpy array.
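+
+For example, a topic can be created and its embedding round-tripped as follows
+(values are hypothetical):
+
+```python
+import numpy as np
+from graph.schema import Topic
+
+topic = Topic.create_topic(name="Topic 0",
+                           keywords=[{"term": "AI", "weight": 0.35}],
+                           bertopic_metadata={"frequency": 42})
+topic.set_topic_embedding(np.array([0.1, 0.2, 0.3]))
+embedding = topic.get_topic_embedding()  # returned as a numpy array
+```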
---
### TopicUpdate
-- **update_id**: Unique identifier
-- **channel_id**: Associated channel
-- **topic_id**: Associated topic
-- **keywords**: Keywords from the update
-- **score_delta**: Change in topic score
-- **timestamp**: Update time
+- **update_id**: Unique identifier for the update.
+- **keywords**: Keywords associated with this update.
+- **score_delta**: Numerical change in the topic score.
+- **timestamp**: Time when the update was made.
+- **topic_embedding**: Vector embedding for the topic.
+
+**Methods**:
+- `create_topic_update`: Creates a new update for a topic.
+- `link_to_channel`: Links this update to a channel.
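+
+A short sketch (hypothetical values; `topic` is an existing `Topic` node):
+
+```python
+from graph.schema import TopicUpdate
+
+update = TopicUpdate.create_topic_update(
+    keywords=[{"term": "AI", "weight": 0.35}], score_delta=0.1)
+update.topic.connect(topic)
+```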
---
### SemanticVector
-- **vector_id**: Unique identifier
-- **semantic_vector**: Aggregated representation of recent message semantics in a channel, preserving privacy by summarizing content instead of storing individual messages.
-- **created_at**: Creation date
+- **vector_id**: Unique identifier for the semantic vector.
+- **semantic_vector**: Aggregated vector summarizing recent message semantics.
+- **created_at**: Timestamp indicating creation.
-> **Explanation**: The SemanticVector node represents a general semantic profile of recent messages in a channel, supporting dynamic topic relevance without storing each message individually. This approach aligns with privacy requirements while allowing for the adjustment of topic relevance.
+**Methods**:
+- `create_semantic_vector`: Creates a new semantic vector.
---
@@ -50,20 +63,25 @@
### ASSOCIATED_WITH (Channel → Topic)
-- **topic_score**: Cumulative or weighted score representing a topic’s importance or relevance to the channel
-- **keywords_weights**: Channel-specific keywords and their weights, highlighting the unique relationship between the channel and topic
-- **message_count**: Number of messages analyzed in relation to the topic
-- **last_updated**: Timestamp of the last update
-- **trend**: Indicator of topic trend over time within the channel
-
-> **Explanation**: This relationship captures the importance of each topic to specific channels, with channel-specific keyword weights providing additional insight into unique topic-channel dynamics. `trend` enables tracking how each topic's relevance changes over time within the channel.
+- **topic_score**: Weighted score indicating a topic's relevance to the channel.
+- **last_updated**: Last time the relationship was updated.
+- **trend**: Trend indication for the topic within the channel.
---
### RELATED_TO (Topic ↔ Topic)
-- **similarity_score**: Degree of similarity between two topics
-- **temporal_similarity**: Metric to track similarity over time
-- **co-occurrence_rate**: Frequency of concurrent discussion of topics across channels
-- **common_channels**: Number of shared channels discussing both topics
-- **topic_trend_similarity**: Measure of similarity in topic trends across channels
+- **similarity_score**: Similarity metric between topics.
+- **temporal_similarity**: Measure of similarity persistence over time.
+- **co_occurrence_rate**: Rate of joint appearance in discussions.
+- **common_channels**: Count of shared channels discussing both topics.
+- **topic_trend_similarity**: Trend similarity between topics across channels.
+
+---
+
+### HasRel (General Relationship)
+
+A generic placeholder relationship type for connections that carry no specific attributes.
+
+> **Note**: The relationships provide both dynamic and static metrics, such as similarity scores and temporal similarity, enabling analytical insights into evolving topic relationships.
+
diff --git a/src/bert/concord.py b/src/bert/concord.py
index 2de4bc1..36bda4d 100644
--- a/src/bert/concord.py
+++ b/src/bert/concord.py
@@ -1,47 +1,72 @@
# concord.py
-
from bert.pre_process import preprocess_documents
-from graph.schema import Topic
+from graph.schema import Topic, Channel, Platform
-def concord(
- topic_model,
- documents,
-):
- # Load the dataset and limit to 100 documents
- print(f"Loaded {len(documents)} documents.")
+def concord(bert_topic, channel_id, platform_id, documents):
+ platform, channel = platform_channel_handler(channel_id, platform_id)
- # Preprocess the documents
+ # Load and preprocess documents
+ print(f"Loaded {len(documents)} documents.")
print("Preprocessing documents...")
documents = preprocess_documents(documents)
- # Fit the model on the documents
+ # Fit the topic model
print("Fitting the BERTopic model...")
- topics, probs = topic_model.fit_transform(documents)
+ bert_topic.fit(documents)
+ topic_info = bert_topic.get_topic_info()
- # Get topic information
- topic_info = topic_model.get_topic_info()
-
- # Print the main topics with importance scores
+ # Log main topics
print("\nMain Topics with Word Importance Scores:")
for index, row in topic_info.iterrows():
topic_id = row['Topic']
if topic_id == -1:
continue # Skip outliers
topic_freq = row['Count']
- topic_words = topic_model.get_topic(topic_id)
+ topic_words = bert_topic.get_topic(topic_id)
- # Prepare a list of formatted word-score pairs
- word_score_list = [
- f"{word} ({score:.4f})" for word, score in topic_words
- ]
+ # Create a list of word-score pairs
+ word_score_list = [{
+ "term": word,
+ "weight": score
+ } for word, score in topic_words]
- # Join the pairs into a single string
- word_score_str = ', '.join(word_score_list)
+ # Create or update a Topic node
+ topic = Topic.create_topic(name=f"Topic {topic_id}",
+ keywords=word_score_list,
+ bertopic_metadata={
+ "frequency": topic_freq
+ }).save()
+ topic.set_topic_embedding(bert_topic.topic_embeddings_[topic_id])
+ channel.associate_with_topic(topic, channel_score=0.5, trend="")
- # Print the topic info and the word-score string
print(f"\nTopic {topic_id} (Frequency: {topic_freq}):")
- print(f" {word_score_str}")
+ print(
+ f" {', '.join([f'{word} ({score:.4f})' for word, score in topic_words])}"
+ )
+
+ print("\nTopic modeling and channel update completed.")
+ return len(documents), None
+
- print("\nTopic modeling completed.")
- return len(documents), Topic.create_topic()
+def platform_channel_handler(channel_id, platform_id):
+ platform = Platform.nodes.get_or_none(platform_id=platform_id)
+ if not platform:
+ print(
+ f"Platform with ID '{platform_id}' not found. Creating new platform..."
+ )
+ platform = Platform(platform_id=platform_id).save()
+ channel = Channel.nodes.get_or_none(channel_id=channel_id)
+ if not channel:
+ print(
+ f"Channel with ID '{channel_id}' not found. Creating new channel..."
+ )
+ channel = Channel.create_channel(
+ channel_id=channel_id,
+ name=f"Channel {channel_id}",
+ description="",
+ language="English",
+ activity_score=0.0,
+ ).save()
+ platform.channels.connect(channel)
+ return platform, channel
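+
+
+# Example invocation (a sketch; identifiers here are hypothetical, and a running
+# Neo4j instance with an initialized neomodel connection is assumed):
+#
+#   from bertopic import BERTopic
+#   count, err = concord(BERTopic(), "channel-1", "platform-1",
+#                        ["first message", "second message"])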
diff --git a/src/bert/topic_update.py b/src/bert/topic_update.py
new file mode 100644
index 0000000..5281f49
--- /dev/null
+++ b/src/bert/topic_update.py
@@ -0,0 +1,98 @@
+# topic_update.py
+from sklearn.metrics.pairwise import cosine_similarity
+from datetime import datetime
+from graph.schema import Topic, TopicUpdate, Channel
+
+SIMILARITY_THRESHOLD = 0.8
+AMPLIFY_INCREMENT = 0.1
+DIMINISH_DECREMENT = 0.05
+NEW_TOPIC_INITIAL_SCORE = 0.1
+
+
+def compute_cosine_similarity(vector_a, vector_b):
+ return cosine_similarity([vector_a], [vector_b])[0][0]
+
+
+def update_channel_topics(channel_topics, new_topics, channel_id):
+ initial_scores = {
+ topic.topic_id: topic.topic_score
+ for topic in channel_topics
+ }
+ topic_updates = []
+
+ for new_topic in new_topics:
+ print(
+ f"\nProcessing new topic: {new_topic['name']} with weight {new_topic['weight']:.4f}"
+ )
+ similarities = {
+ idx:
+ compute_cosine_similarity(new_topic['embedding'],
+ channel_topic.topic_embedding)
+ for idx, channel_topic in enumerate(channel_topics)
+ }
+ print("Similarity scores:", similarities)
+
+ topic_amplified = False
+ for idx, similarity in similarities.items():
+ if similarity >= SIMILARITY_THRESHOLD:
+ channel_topic = channel_topics[idx]
+ original_score = channel_topic.topic_score
+ channel_topic.topic_score = min(
+ 1, channel_topic.topic_score + AMPLIFY_INCREMENT)
+ delta = channel_topic.topic_score - original_score
+ channel_topic.updated_at = datetime.utcnow()
+ channel_topic.save()
+ print(
+ f"Amplifying topic '{channel_topic.name}' from {original_score:.4f} to "
+ f"{channel_topic.topic_score:.4f} (delta = {delta:.4f})")
+
+ topic_update = TopicUpdate.create_topic_update(
+ keywords=channel_topic.keywords, score_delta=delta)
+ topic_update.topic.connect(channel_topic)
+ topic_updates.append(topic_update)
+
+ topic_amplified = True
+
+ if not topic_amplified:
+ print(
+ f"Creating new topic '{new_topic['name']}' with initial score {NEW_TOPIC_INITIAL_SCORE:.4f}"
+ )
+ topic_node = Topic(name=new_topic['name'],
+ topic_embedding=new_topic['embedding'],
+ topic_score=NEW_TOPIC_INITIAL_SCORE,
+ updated_at=datetime.utcnow()).save()
+ topic_node.add_update(new_topic.get('keywords', []),
+ NEW_TOPIC_INITIAL_SCORE)
+            # associate_with_topic now takes (topic, channel_score, trend)
+            Channel.nodes.get(channel_id=channel_id).associate_with_topic(
+                topic_node, NEW_TOPIC_INITIAL_SCORE, 'New')
+ channel_topics.append(topic_node)
+
+ for channel_topic in channel_topics:
+ if channel_topic.name not in [nt['name'] for nt in new_topics]:
+ original_score = channel_topic.topic_score
+ channel_topic.topic_score = max(
+ 0, channel_topic.topic_score - DIMINISH_DECREMENT)
+ delta = original_score - channel_topic.topic_score
+ channel_topic.updated_at = datetime.utcnow()
+ channel_topic.save()
+ print(
+ f"Diminishing topic '{channel_topic.name}' from {original_score:.4f} to "
+ f"{channel_topic.topic_score:.4f} (delta = -{delta:.4f})")
+
+ if delta != 0:
+ topic_update = TopicUpdate.create_topic_update(
+ keywords=channel_topic.keywords, score_delta=-delta)
+ topic_update.topic.connect(channel_topic)
+ topic_updates.append(topic_update)
+
+ print("\nUpdated Channel Topics:")
+ print("{:<30} {:<15} {:<15}".format("Topic Name", "Initial Score",
+ "Updated Score"))
+ for topic in channel_topics:
+ initial_score = initial_scores.get(topic.topic_id,
+ NEW_TOPIC_INITIAL_SCORE)
+ print("{:<30} {:<15.4f} {:<15.4f}".format(topic.name, initial_score,
+ topic.topic_score))
+
+ return topic_updates
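+
+
+# Example (a sketch with hypothetical data; assumes `channel_topics` holds
+# Topic nodes that already carry embeddings for the channel):
+#
+#   new_topics = [{"name": "Topic 0",
+#                  "weight": 0.42,
+#                  "embedding": [0.1, 0.2, 0.3],
+#                  "keywords": [{"term": "AI", "weight": 0.35}]}]
+#   updates = update_channel_topics(channel_topics, new_topics, "channel-1")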
diff --git a/src/concord/server/app_lifespan.py b/src/concord/server/app_lifespan.py
index 2f9c162..d5f48dd 100644
--- a/src/concord/server/app_lifespan.py
+++ b/src/concord/server/app_lifespan.py
@@ -1,13 +1,15 @@
# app_lifespan.py
from contextlib import asynccontextmanager
from fastapi import FastAPI
-
from bert.model_manager import ModelManager
+from graph.graph import initialize_neo4j
@asynccontextmanager
async def lifespan(app: FastAPI):
- # Startup: Initialize the BERTopic model
+ # Startup: Initialize the BERTopic model and Neo4j connection
ModelManager.initialize_model()
+ initialize_neo4j() # Initialize Neo4j connection
+
yield
- # Shutdown: Perform any necessary cleanup here
+ # Shutdown logic here (if needed)
diff --git a/src/graph/graph.py b/src/graph/graph.py
index fa09ccf..21e733f 100644
--- a/src/graph/graph.py
+++ b/src/graph/graph.py
@@ -1,37 +1,14 @@
# graph.py
import os
+from neomodel import config
-from neo4j import GraphDatabase
+# Get the connection details from environment variables
+DATABASE_URL = os.getenv("DATABASE_URL",
+ "localhost:7687") # No `bolt://` prefix here
+NEO4J_USER = os.getenv("NEO4J_USER", "neo4j")
+NEO4J_PASSWORD = os.getenv("NEO4J_PASSWORD", "dev-password")
-# Initialize Neo4j driver
def initialize_neo4j():
- # Get uri username and password from ENV
- user = os.environ.get("NEO4J_USERNAME")
- password = os.environ.get("NEO4J_PASSWORD")
- uri = os.environ.get("NEO4J_URI")
-
- return GraphDatabase.driver(uri, auth=(user, password))
-
-
-# Function to store topics in Neo4j
-def store_topics_in_neo4j(model, batch_num):
- """
- Store topics and their relationships in Neo4j.
- """
- driver = initialize_neo4j()
- with driver.session() as session:
- topics = model.get_topics()
- for topic_num, words in topics.items():
- if topic_num == -1:
- continue # -1 is usually the outlier/noise topic
- # Create Topic node
- session.run(
- "MERGE (t:Topic {id: $id}) "
- "SET t.keywords = $keywords, t.batch = $batch",
- id=topic_num,
- keywords=words,
- batch=batch_num,
- )
- driver.close()
- print("Topics stored in Neo4j.")
+ # Add the 'bolt://' prefix and format the URL with credentials
+ config.DATABASE_URL = f"bolt://{NEO4J_USER}:{NEO4J_PASSWORD}@{DATABASE_URL}"
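+
+    # With the defaults above this resolves to:
+    #   bolt://neo4j:dev-password@localhost:7687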
diff --git a/src/graph/schema.py b/src/graph/schema.py
index c4412ac..fe8b679 100644
--- a/src/graph/schema.py
+++ b/src/graph/schema.py
@@ -1,6 +1,8 @@
# schema.py
+import json
from datetime import datetime
-from typing import List, Dict, Any
+from typing import List, Union, Any, Dict
+import numpy as np
from neomodel import (StructuredNode, StructuredRel, StringProperty,
IntegerProperty, FloatProperty, DateTimeProperty,
@@ -25,16 +27,42 @@ class RelatedToRel(StructuredRel):
topic_trend_similarity = FloatProperty()
+class HasRel(StructuredRel):
+ pass
+
+
+class Platform(StructuredNode):
+ platform_id = StringProperty()
+ name = StringProperty()
+ description = StringProperty()
+ contact_email = StringProperty()
+ webhook_url = StringProperty()
+ auth_token = StringProperty()
+
+ # Relationships
+ channels = RelationshipFrom('Channel', 'ON_PLATFORM')
+
+ # Wrapper Functions
+ @classmethod
+ def create_platform(cls, platform_id: str, name: str, description: str,
+ contact_email: str, webhook_url: str) -> 'Platform':
+ return cls(platform_id=platform_id,
+ name=name,
+ description=description,
+ contact_email=contact_email,
+ webhook_url=webhook_url).save()
+
+    def add_channel(self, channel_id: str):
+        # Save the node first; neomodel cannot connect an unsaved node.
+        self.channels.connect(Channel(channel_id=channel_id).save())
+
+
# Nodes
class Channel(StructuredNode):
- channel_id = UniqueIdProperty()
- platform = StringProperty()
+ channel_id = StringProperty()
name = StringProperty()
description = StringProperty()
created_at = DateTimeProperty(default_now=True)
- active_members_count = IntegerProperty()
language = StringProperty()
- region = StringProperty()
activity_score = FloatProperty()
# Relationships
@@ -46,25 +74,19 @@ class Channel(StructuredNode):
# Wrapper Functions
@classmethod
- def create_channel(cls, platform: str, name: str, description: str,
- active_members_count: int, language: str, region: str,
- activity_score: float) -> 'Channel':
- return cls(platform=platform,
+ def create_channel(cls, channel_id: str, name: str, description: str,
+ language: str, activity_score: float) -> 'Channel':
+ return cls(channel_id=channel_id,
name=name,
description=description,
- active_members_count=active_members_count,
language=language,
- region=region,
activity_score=activity_score).save()
def associate_with_topic(self, topic: 'Topic', channel_score: float,
- keywords_weights: List[str], message_count: int,
trend: str) -> None:
self.topics.connect(
topic, {
'channel_score': channel_score,
- 'keywords_weights': keywords_weights,
- 'message_count': message_count,
'last_updated': datetime.utcnow(),
'trend': trend
})
@@ -78,6 +100,7 @@ def add_semantic_vector(
class Topic(StructuredNode):
+ # Define properties
topic_id = UniqueIdProperty()
name = StringProperty()
keywords = ArrayProperty()
@@ -94,13 +117,18 @@ class Topic(StructuredNode):
# Wrapper Functions
@classmethod
- def create_topic(cls, name: str, keywords: List[str],
+    def create_topic(cls, name: str, keywords: List[Dict[str, Any]],
bertopic_metadata: Dict[str, Any]) -> 'Topic':
"""
Create a new topic node with the given properties.
"""
+        # Neo4j cannot store maps inside arrays, so serialize each keyword
+        # dict to its own JSON string before storing in the ArrayProperty.
+        keywords_json = [
+            json.dumps({
+                "term": kw["term"],
+                "weight": float(kw["weight"])
+            }) for kw in keywords
+        ]
return cls(name=name,
- keywords=keywords,
+ keywords=keywords_json,
bertopic_metadata=bertopic_metadata).save()
def relate_to_topic(self, other_topic: 'Topic', similarity_score: float,
@@ -130,15 +158,30 @@ def add_update(self, update_keywords: List[str],
update.topic.connect(self)
return update
- def set_topic_embedding(self, embedding: List[float]) -> None:
+ def set_topic_embedding(self, embedding: Union[List[float],
+ np.ndarray]) -> None:
"""
- Set the topic embedding vector, ensuring all values are floats.
+ Set the topic embedding vector, converting numpy.ndarray to a list of floats.
"""
+ # If it's a numpy array, convert it to a list
+ if isinstance(embedding, np.ndarray):
+ embedding = embedding.astype(float).tolist()
+
+ # Validate all elements are floats
if not all(isinstance(val, float) for val in embedding):
raise ValueError("All elements in topic_embedding must be floats.")
+
+ # Save the list representation
self.topic_embedding = embedding
self.save()
+ def get_topic_embedding(self) -> np.ndarray:
+ """
+ Retrieve the topic embedding as a numpy array.
+ """
+ return np.array(self.topic_embedding, dtype=float)
+
class TopicUpdate(StructuredNode):
update_id = UniqueIdProperty()
diff --git a/src/openapi_server/impl/channels_api.py b/src/openapi_server/impl/channels_api.py
index d645fd1..ad8e689 100644
--- a/src/openapi_server/impl/channels_api.py
+++ b/src/openapi_server/impl/channels_api.py
@@ -73,10 +73,18 @@ async def post_channel_messages(
) -> ChannelMessagesResponse:
"""Processes a message feed from a specified channel and updates associated topics."""
topic_model = ModelManager.get_model()
- processed_count, error = concord(topic_model,
- channel_messages_request.messages)
+
+    # Pass channel_id and platform_id through to the topic pipeline
+ processed_count, error = concord(
+ topic_model,
+ channel_id,
+ platform_id,
+ channel_messages_request.messages,
+ )
+
if processed_count == -1:
return ChannelMessagesResponse(status="error", error=error)
+
return ChannelMessagesResponse(platform_id=platform_id,
channel_id=channel_id,
processed_messages=processed_count,
diff --git a/tests/Research Channels Message Upload/Custom channels/Custom message.bru b/tests/Research Channels Message Upload/Custom channels/Custom message.bru
index 136a110..070fcd0 100644
--- a/tests/Research Channels Message Upload/Custom channels/Custom message.bru
+++ b/tests/Research Channels Message Upload/Custom channels/Custom message.bru
@@ -18,8 +18,69 @@ headers {
body:json {
{
"messages": [
- "I made concord a nice AI plugin for Edge city",
- "Nice, do you use ChatGPT ?"
+ "What's the structure of the Edge City Lanna Hackathon? Are remote participants welcome?",
+ "Absolutely! The hackathon is open to remote and in-person participants, and there’s a demo day at the end!",
+ "Does the hackathon cover tools for both community building and AI development?",
+ "Yes, we're focusing on tools for human flourishing, with sessions on AI, decentralized governance, and community tech.",
+ "Is there a timeline for sessions on privacy tech during the hackathon?",
+ "Privacy tech sessions will be on day 3 and day 5, with workshops covering encryption and AI-driven insights.",
+ "What's the purpose of the 'Future Cities' event in Edge City?",
+ "Future Cities explores sustainable city models and decentralized governance structures that might shape future societies.",
+ "Can people suggest topics for the Future Cities session?",
+ "Yes, we’re open to suggestions! Just drop them in the Social Layer under #FutureCities.",
+ "How does Concord handle cross-platform communication between Matrix, Discord, and Telegram?",
+ "Concord aggregates discussions across platforms, helping communities stay aligned while respecting privacy.",
+ "I’m new here. What's the Edge City Weekly Assembly?",
+ "It’s a community gathering where we share updates, discuss projects, and plan next steps together.",
+ "Is there a way to view previous Assembly discussions?",
+ "Yes, all past Assembly notes are saved in the community Notion workspace for everyone to access.",
+ "What’s unique about the Concord plugin compared to other communication tools?",
+ "Concord uses AI to map topics across channels, connecting discussions and reducing noise to help communities focus.",
+ "How does Concord support DAOs and self-organized groups?",
+ "Concord has voting and polling features and encourages transparent decision-making in decentralized groups.",
+ "Are there age-friendly activities at Edge City?",
+ "Yes! Kids House has morning meetups, and there are family-friendly sessions like yoga and art workshops.",
+ "Does Concord offer support for multilingual communities?",
+ "That’s on our roadmap. We want to support global communities with translation tools for key discussions.",
+ "What is 'Trade-Off Lab: Tensions Game'?",
+ "It’s an interactive workshop where participants discuss trade-offs and tensions in decision-making frameworks.",
+ "Is Concord planning any blockchain integration for its community platform?",
+ "Yes, we’re exploring Ethereum-based authentication and might add other blockchains as we grow.",
+ "What’s the timeline for the Stablecoins 3.0 event?",
+ "Stablecoins 3.0 runs over three days, covering payment systems, regulations, and identity use cases.",
+ "How does Concord approach data privacy for its users?",
+ "Data is encrypted end-to-end, with only necessary data stored temporarily, maintaining a decentralized structure.",
+ "Does Concord work on mobile?",
+ "A mobile-friendly version is planned! For now, it’s optimized for web but usable on mobile browsers.",
+ "Can Concord be used in disaster response scenarios for real-time updates?",
+ "Yes, Concord’s quick topic mapping can help organize emergency response discussions securely.",
+ "What’s the purpose of the 'Propel' initiative?",
+ "Propel helps community members fund events by crowdfunding through upfront commitments.",
+ "How can I create a proposal on Propel?",
+ "Just go to propelevents.io, outline your project, and attendees can bid to support it!",
+ "Is there a cost to using Concord, or is it fully open-source?",
+ "It’s fully open-source, with no core costs. Custom integrations may have separate costs.",
+ "What’s unique about Concord’s approach to community engagement?",
+ "Concord isn’t about broadcasting; it’s about enabling structured, decentralized community collaboration.",
+ "Are there any book clubs or themed discussions here?",
+ "Yes! There's a book club discussing 'Beginning of Infinity' and a variety of themed sessions.",
+ "How often will Concord receive updates?",
+ "We’re aiming for quarterly updates based on community feedback.",
+ "Will Concord integrate with tools like Trello for project management?",
+ "We’re planning integrations to connect with project management tools, making it easier for communities to organize.",
+ "Can Concord be customized for specific community needs?",
+ "Yes! Being open-source, communities can modify it to fit their own requirements.",
+ "What types of discussions are part of 'Second Renaissance Symposium'?",
+ "This symposium covers art, philosophy, and futuristic societal structures in collaborative settings.",
+ "How are hackathon projects presented at the end?",
+ "Participants showcase their projects on Demo Day, with a panel of mentors offering feedback.",
+ "What’s the difference between Concord and traditional social media?",
+ "Concord emphasizes privacy and collaboration, without the broadcasting and ad-focused design of social media.",
+ "Is there a roadmap for Concord’s future features?",
+ "Yes, we’re looking to add multilingual support, more integrations, and enhanced topic mapping.",
+ "How can people without technical skills contribute to the hackathon?",
+ "Non-technical members can join teams or participate in sessions on ideation and project brainstorming."
]
}
}