# AI-Powered Personalized News Aggregation
Scrape, analyze, and curate custom news articles, and see what others are interested in!
- Quick Start
- Project Structure
- Development
- Customization
- API Endpoints
- Environment Variables
- Contributing
- Tech Stack
- Roadmap
- License
## Notes
- Hey all! This is Reagan. Welcome to the News Use repo! As of Oct 1, 2025, we aim to make this app more comprehensive and span even more sources! One thing we loved about this project is that you get to see articles that other people made :D Have fun generating!
## Quick Start

### Frontend

```bash
# Install frontend dependencies
cd news-use
npm install

# Set up environment
cp .env.example .env.local
# Add your Convex URL (auto-generated on first run)

# Run development server
npm run dev
```

The app will open at http://localhost:5173.
### Backend

```bash
cd backend

# Set up Python environment
python -m venv .venv
source .venv/bin/activate  # On Windows: .venv\Scripts\activate

# Install dependencies
pip install -r requirements.txt

# Set up environment
cp .env.example .env.local
# Add your GOOGLE_API_KEY and BROWSER_USE_API_KEY

# Run backend server
python api.py
```

The API will run at http://localhost:8000.
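Before starting the server, it can help to confirm the two API keys named above are actually loaded into the environment. This standalone preflight sketch is not part of the repo; it simply checks the variables the backend reads:

```python
import os

# The backend expects these keys in its environment (see Environment
# Variables below); list any that are missing before running api.py.
REQUIRED_KEYS = ("GOOGLE_API_KEY", "BROWSER_USE_API_KEY")

missing = [key for key in REQUIRED_KEYS if not os.getenv(key)]
if missing:
    print(f"Missing keys: {', '.join(missing)}")
else:
    print("Environment looks good")
```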
## Project Structure

```
news-use/
├── news-use/              # React frontend
│   ├── src/
│   │   ├── components/    # UI components
│   │   ├── lib/           # API client utilities
│   │   └── App.tsx        # Main application
│   └── convex/            # Convex backend
│       ├── schema.ts      # Database schema
│       └── newspapers.ts  # CRUD operations
└── backend/               # FastAPI server
    ├── api.py             # Main API
    ├── news_scrapers/     # NYT & WashPost scrapers
    └── elaborators/       # AI summarization
```
## Development

### Frontend

```bash
cd news-use
npm run dev           # Start both frontend and Convex
npm run dev:frontend  # Start only frontend
npm run dev:backend   # Start only Convex
npm run build         # Production build
npm run lint          # Run linter
```

### Backend

```bash
cd backend

# Using uv (recommended)
uv pip install -r requirements.txt
python api.py

# Access API docs
# http://localhost:8000/docs
```

Access the Convex Dashboard to view/edit data:
```bash
cd news-use
npx convex dashboard
```

## Customization

### Adding a News Source

1. Create a scraper (`backend/news_scrapers/source.py`):
```python
import os

from browser_use_sdk import BrowserUse
from .models import Articles

client = BrowserUse(api_key=os.getenv("BROWSER_USE_API_KEY"))

def search_source(query: str) -> dict:
    task = client.tasks.create_task(
        task=f"Search for articles about: {query}",
        llm="gemini-flash-latest",
        schema=Articles,
    )
    result = task.complete()
    return result.output
```

2. Update the API (`backend/api.py`):
```python
from news_scrapers.source import search_source

@app.post("/search/source")
async def search_source_endpoint(search: SearchQuery):
    return search_source(search.query)
```

3. Update the frontend client (`news-use/src/lib/api.ts`):
```typescript
export async function searchSource(query: string) {
  const response = await fetch(`${API_BASE_URL}/search/source`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ query }),
  });
  return response.json();
}
```

4. Add it to QueryInput (`news-use/src/components/QueryInput.tsx`):
```typescript
const sourceResponse = await searchSource(query);
allArticles.push(...sourceResponse.articles);
```

### Customizing the AI Summary

Edit the prompt in `backend/elaborators/summarize_all.py` (line ~44):
```python
prompt = f"""
Based on these news articles, provide:
1. Your custom analysis requirements
2. Additional context you want
3. Specific focus areas
...
"""
```

### Customizing the Summary Display

Modify sections in `news-use/src/components/NewspaperDetail.tsx`:
```typescript
const parseSummary = (content: string) => {
  // Excerpt: `formatted` and `sections` are defined earlier in this function
  // Add your custom section parsing logic
  const customMatch = formatted.match(/5\.\s*Your Section[^]*/i);
  if (customMatch) {
    sections.push({
      title: "Your Custom Section",
      content: customMatch[0],
      key: "custom",
    });
  }
};
```

## API Endpoints

### Backend (FastAPI)

- `POST /search/nyt` - Search New York Times
- `POST /search/washpost` - Search Washington Post
- `POST /summarize` - Summarize articles with AI
- `GET /health` - Health check
- `GET /docs` - Interactive API documentation
### Convex Functions

- `createNewspaper` - Save newspaper to database
- `listNewspapers` - Get public newspapers
- `getNewspaper` - Get specific newspaper by ID
- `getStats` - Get aggregated statistics
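For ad-hoc testing outside the frontend, the search endpoints above can be called from Python. This is a sketch under assumptions: the request body shape `{"query": ...}` mirrors the frontend client, and the helper names here are illustrative, not part of the repo:

```python
import json
from urllib import request

API_BASE_URL = "http://localhost:8000"  # backend default from Quick Start

def build_search_request(source: str, query: str) -> request.Request:
    """Build a POST for /search/<source>, where source is "nyt" or "washpost"."""
    return request.Request(
        f"{API_BASE_URL}/search/{source}",
        data=json.dumps({"query": query}).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

def search(source: str, query: str) -> dict:
    # Requires the FastAPI server from Quick Start to be running locally.
    with request.urlopen(build_search_request(source, query)) as resp:
        return json.load(resp)
```

For example, `search("nyt", "climate change")` would return whatever article payload the NYT scraper produces.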
## Environment Variables

### Frontend (`news-use/.env.local`)

```bash
CONVEX_DEPLOYMENT=dev:your-deployment-name
VITE_CONVEX_URL=https://your-deployment.convex.cloud
VITE_API_URL=http://localhost:8000  # Optional, defaults to localhost:8000
```

### Backend (`backend/.env.local`)

```bash
GOOGLE_API_KEY=your_google_api_key
BROWSER_USE_API_KEY=your_browser_use_api_key
```

If deploying to production, update CORS in `backend/api.py`:
```python
app.add_middleware(
    CORSMiddleware,
    allow_origins=["https://your-domain.com"],  # Replace *
    allow_credentials=True,
    allow_methods=["*"],
    allow_headers=["*"],
)
```

## Contributing

1. Fork the repository
2. Create a feature branch (`git checkout -b feature/amazing`)
3. Commit your changes (`git commit -m 'Add feature'`)
4. Push the branch (`git push origin feature/amazing`)
5. Open a Pull Request
## Tech Stack

- Frontend: React, TypeScript, Vite, Tailwind CSS v4
- Backend: Convex, FastAPI, Render
- AI: Google Gemini Flash (summarization)
- Scraping: Browser Use SDK (automated browsing)
## Roadmap

- Add more news sources (Reuters, Bloomberg, BBC)
- News analytics (sentiment analysis, trends, etc.)
- Email notifications for new articles
- Advanced filtering and search
## License

MIT - See LICENSE file for details
Built with Browser Use - Automated web scraping powered by AI
Made with ❤️ by Reagan + Shawn + Browser Use