Building Masite: A Portfolio Website Builder — Part 6: Servers
This 8-part series focuses on building Masite, a portfolio website builder. Link to the website: https://masite-portfolio-website-builder.vercel.app/
Hi there, I am Mrinal Prakash. I am a Software Developer with a passion for building efficient, scalable, and user-friendly web applications.
In the previous parts of this series, we designed the APIs to add, list, update, and delete users and their details in the database.
In this part, we will build a server that, given a link to a blog, can fetch all of the posts published there using its RSS feed.
What Are RSS Feeds?
RSS (Really Simple Syndication) is a type of web feed that allows users and applications to access updates to websites in a standardized, computer-readable format. An RSS feed provides a way to automatically gather content from a site, such as blog posts, and display them in an aggregated format.
Many blogging platforms generate RSS feeds for user profiles or blogs, providing metadata such as the following (a simplified feed item is shown after this list):
- The title of the post
- A short snippet or description
- The publication date
- A link to the full article
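For illustration, a single post in an RSS feed is an XML item element along these lines (simplified; real feeds carry more fields):
<item>
  <title>My First Post</title>
  <description>A short snippet of the post...</description>
  <pubDate>Mon, 01 Jan 2024 00:00:00 GMT</pubDate>
  <link>https://example.com/blog/my-first-post</link>
</item>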
For our project, we will tap into this functionality to allow Masite users to integrate their external blog content into their portfolios.
Why Use RSS Feeds for Blog Content?
In the context of portfolio websites, showcasing your latest blog posts is essential. However, maintaining the same blog content across multiple platforms and manually updating your portfolio each time a new post is published can be cumbersome. This is where RSS feeds come in handy.
Benefits of Using RSS Feeds:
- Automated Updates: Automatically fetch new blog posts when they are published without needing manual input.
- Platform Flexibility: Users can pull content from multiple platforms (e.g., Medium, Dev.to, Hashnode) and display it in one unified interface.
- Standardized Format: RSS feeds provide structured data that can be easily parsed and displayed in a consistent way on the portfolio site.
In this part of the series, we will set up a server that can dynamically retrieve and present blog content from various platforms using their RSS feeds.
How RSS Parsing Works
RSS Parsing involves reading an XML file (the feed) and extracting structured data such as article titles, descriptions, publication dates, and URLs. We can then take this data and display it in a user-friendly manner on our portfolio.
For this tutorial, we’ll focus on three platforms:
- Medium
- Dev.to
- Hashnode
Each of these platforms provides an RSS feed, and while the general structure of RSS is standard, there are some differences in how the platforms structure their content. We’ll account for these differences by creating platform-specific parsers.
Step-by-Step Guide to Building the Server
Let’s get started by building the server that will process incoming requests for blog posts based on a user’s provided feed link.
1. Installing the RSS Parser Library
First, we need to install the rss-parser library, which will handle the heavy lifting of fetching and parsing RSS feeds. To install it, run the following command in your terminal:
npm install rss-parser
The rss-parser library converts RSS feed XML into plain JavaScript objects that we can easily work with in our application.
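As a rough sketch of what that looks like (the feed URL below is a placeholder; substitute any real feed):
import Parser from 'rss-parser';

const parser = new Parser();

const feed = await parser.parseURL('https://example.com/feed');

console.log(feed.title);                    // the feed's own title
console.log(feed.items[0].title);           // first post's title
console.log(feed.items[0].link);            // first post's URL
console.log(feed.items[0].pubDate);         // publication date string
console.log(feed.items[0].contentSnippet);  // plain-text snippet, when present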
2. Implementing Parsers for Medium, Dev.to, and Hashnode
Each platform (Medium, Dev.to, and Hashnode) structures its RSS feed slightly differently. Therefore, we need a parser for each platform that correctly handles its feed structure and extracts the relevant information (e.g., title, description, date, link).
Below is the code for parsing each of these platforms’ RSS feeds:
Medium RSS Parser
Medium’s RSS feeds contain a special field called content:encodedSnippet for the description of the post. We extract this along with the title, publication date, and link.
import Parser from 'rss-parser';

const parser = new Parser();

export const mediumRSSParser = async (link) => {
  const feedResponse = await parser.parseURL(link);
  const articles = feedResponse.items.map(item => ({
    title: item.title,
    description: item['content:encodedSnippet'], // Medium stores the snippet here
    datePublished: item.pubDate,
    link: item.link
  }));
  return articles;
};
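As a quick check, you can call the parser directly with a Medium feed URL (the username below is a placeholder):
const articles = await mediumRSSParser('https://medium.com/feed/@yourusername');
// => [{ title, description, datePublished, link }, ...]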
Dev.to RSS Parser
Dev.to uses a more standard RSS format, where the description is stored in contentSnippet. The parser extracts the relevant fields in the same way.
export const devRSSParser = async (link) => {
  const feedResponse = await parser.parseURL(link);
  const articles = feedResponse.items.map(item => ({
    title: item.title,
    description: item.contentSnippet, // Dev.to stores the snippet here
    datePublished: item.pubDate,
    link: item.link
  }));
  return articles;
};
Hashnode RSS Parser
Hashnode’s RSS feed stores the post description in the description field. Like the others, we extract the key fields.
export const hashNodeRSSParser = async (link) => {
  const feedResponse = await parser.parseURL(link);
  const articles = feedResponse.items.map(item => ({
    title: item.title,
    description: item.description, // Hashnode stores the snippet here
    datePublished: item.pubDate,
    link: item.link
  }));
  return articles;
};
3. Creating a Provider Class to Fetch Articles
We’ll now build a class called ArticlesProvider that fetches articles from the different blogging platforms based on the provided feed URL. This class uses the parsers we created earlier.
import { mediumRSSParser, devRSSParser, hashNodeRSSParser } from './FeedParser';

// A simple value object describing one fetched article
class Article {
  constructor(title, description, datePublished, link) {
    this.title = title;
    this.description = description;
    this.datePublished = datePublished;
    this.link = link;
  }
}

export default class ArticlesProvider {
  constructor(link) {
    this.link = link;
  }

  // Pick the right parser based on the feed URL's origin
  fetchArticles = async () => {
    if (this.link && this.link.startsWith("https://medium.com")) {
      return await mediumRSSParser(this.link);
    } else if (this.link && this.link.startsWith("https://dev.to")) {
      return await devRSSParser(this.link);
    } else if (this.link && this.link.startsWith("https://hashnode.com")) {
      return await hashNodeRSSParser(this.link);
    }
  };
}
The ArticlesProvider class is responsible for detecting the platform from the URL and invoking the appropriate RSS parser.
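A quick usage sketch, with a placeholder Dev.to feed URL:
const provider = new ArticlesProvider('https://dev.to/feed/yourusername');
const articles = await provider.fetchArticles(); // dispatches to devRSSParser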
4. Implementing the API Endpoint
Now that we have the RSS parsers and the ArticlesProvider class in place, we need to set up an API endpoint that handles HTTP requests and returns the parsed articles.
Here’s how we create an API route in Next.js to handle fetching the articles:
import ArticlesProvider from '@/pages/services/ArticlesProvider';

export default async function handler(req, res) {
  const { feedLink } = req.body;
  const provider = new ArticlesProvider(feedLink);
  const articles = await provider.fetchArticles();
  res.status(200).json(articles);
}
This endpoint listens for POST requests containing a feedLink in the request body. Once the feed link is received, the ArticlesProvider class fetches the articles, and the server responds with them in JSON format.
5. Testing the API
To test our API, we can use a tool like Insomnia or Postman. Here’s how you can test it:
- Set up a POST request to http://localhost:3000/api/articles.
- Add a JSON body containing a feed link, like this:
{
  "feedLink": "https://medium.com/feed/@yourusername"
}
Send the request, and if everything is set up correctly, you’ll get a list of articles returned in the response.
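The exact values depend on your feed, but the response body will be an array shaped roughly like this (sample data for illustration):
[
  {
    "title": "My First Post",
    "description": "A short snippet of the post...",
    "datePublished": "Mon, 01 Jan 2024 00:00:00 GMT",
    "link": "https://medium.com/@yourusername/my-first-post"
  }
]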
Error Handling and Improvements
While we have the basic functionality in place, there are a few considerations for improving the robustness and user experience of our server.
1. Error Handling
We should ensure that the server gracefully handles invalid URLs or unsupported platforms. This can be done by adding appropriate validation checks in the ArticlesProvider class:
if (!this.link) {
  throw new Error('Feed link is required');
} else if (
  !this.link.startsWith("https://medium.com") &&
  !this.link.startsWith("https://dev.to") &&
  !this.link.startsWith("https://hashnode.com")
) {
  throw new Error('Unsupported feed platform');
}
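With these checks throwing, the API handler can catch the errors and map them to HTTP status codes. A minimal sketch of a revised handler (the status mapping shown is one reasonable choice, not the only one):
import ArticlesProvider from '@/pages/services/ArticlesProvider';

export default async function handler(req, res) {
  if (req.method !== 'POST') {
    return res.status(405).json({ error: 'Method not allowed' });
  }
  try {
    const { feedLink } = req.body;
    const provider = new ArticlesProvider(feedLink);
    const articles = await provider.fetchArticles();
    res.status(200).json(articles);
  } catch (error) {
    // Validation errors from ArticlesProvider become 400s; anything else
    // (e.g., a network failure while fetching the feed) becomes a 500.
    const isValidationError =
      error.message === 'Feed link is required' ||
      error.message === 'Unsupported feed platform';
    res.status(isValidationError ? 400 : 500).json({ error: error.message });
  }
}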
2. Caching
Fetching articles directly from the RSS feed every time a request is made can be inefficient. To improve performance, consider caching the results of the feed for a short period (e.g., 10 minutes) to reduce the load on the server and RSS endpoints.
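A minimal in-memory sketch (the function name and TTL are illustrative; this caches per server instance only, so a shared store like Redis would be the next step in production):
import ArticlesProvider from '@/pages/services/ArticlesProvider';

const cache = new Map(); // feedLink -> { articles, fetchedAt }
const TTL_MS = 10 * 60 * 1000; // entries stay fresh for 10 minutes

export async function fetchArticlesCached(feedLink) {
  const entry = cache.get(feedLink);
  if (entry && Date.now() - entry.fetchedAt < TTL_MS) {
    return entry.articles; // serve the cached copy while it is fresh
  }
  const provider = new ArticlesProvider(feedLink);
  const articles = await provider.fetchArticles();
  cache.set(feedLink, { articles, fetchedAt: Date.now() });
  return articles;
}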
3. Pagination
If the RSS feed contains a lot of articles, we may want to paginate the results to keep responses manageable. This can be done by implementing pagination on both the server and the front end.
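One simple server-side approach, assuming page and pageSize parameters in the request (the names are illustrative), is to slice the parsed array before responding:
function paginate(articles, page = 1, pageSize = 10) {
  const start = (page - 1) * pageSize;
  return {
    total: articles.length,
    page,
    pageSize,
    items: articles.slice(start, start + pageSize),
  };
}

// In the handler: res.status(200).json(paginate(articles, page, pageSize));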
Final Steps
Now, when you open the website, you will see a screen listing the fetched articles.
Conclusion
In this part, we built a server that can fetch blog articles from different platforms using RSS feeds. This will allow users of our Masite Portfolio Builder to display their blog content from various platforms like Medium, Dev.to, and Hashnode directly on their portfolio.
Next up in Part 7, we’ll focus on authentication. We’ll implement NextAuth.js for managing user sessions and protecting the portfolio content, ensuring only the rightful user can modify their portfolio.
If you enjoyed this article, feel free to connect with me on LinkedIn and follow me on Medium for more content on building scalable web applications.