
Padmaja Bhol · 7 min read

Project Description 📌

Scientific code needs to be open source in order to be useful and peer-reviewed. Code Is Science aims to help researchers easily find journals that require open source code. Through the Journal Policy Tracker, users can submit journals so that their code-sharing policies can be tracked. As part of Google Summer of Code 2022, I got the opportunity to work on the Journal Policy Tracker under the Open Bioinformatics Foundation (OBF). OBF is an umbrella organization that supports smaller organizations participating in various mentorship initiatives.

How the project is supposed to work

The project should have auth components to log in/register a user, and then that particular user can add journal policies, which are stored in a database. The user should also be able to edit and delete a specific journal they created at any time. The project should also have a page displaying all journals and a separate component to display an individual journal. Particularly for the auth component, there should be a method to verify the user and prevent spam. Along with this, there should be a component to display user roles, i.e. whether the logged-in user is an admin or not.

Tech Stack Used

For the frontend library, we decided to go with React because of its flexibility and its suitability for building quality user interfaces.

Purpose            | Tools and technologies used
Frontend framework | React.js
Designing          | Figma
Package manager    | Yarn
CSS library        | Styled Components, vanilla CSS
GraphQL client     | Apollo Client
Testing            | Cypress
State management   | useReducer
Repository         | https://github.com/codeisscience/journal-policy-tracker-frontend/tree/gsoc22

Community Bonding 🫶🏽

Mentors & Co-Mentee

I can't thank my project mentors, Yo Yehudi and Isaac Miti, enough for helping me navigate these three months. They were understanding throughout the period, given that I was a full-time university student during my term.

Apart from this, I would also like to thank my co-mentee, Devesh, who helped me a lot, especially during the backend integration, since this was my first time working with a GraphQL backend and I had a lot of surprises along the way 👀.

Community Bonding Period

I took the first few days to set up the project, create a separate branch on the main repository to push all of my commits to, and, most importantly, design all the components that I had to develop during GSoC.

Subsequently, I got on a call with my co-mentee, where we discussed everything we had in mind about how we could shape the project.

Phase 1

Restructuring

For the first task, I restructured the project directory as given below:


└── journal-policy-tracker-frontend/
    ├── .github/
    │   └── issue/PR templates
    └── src/
        ├── config/
        │   └── website content
        ├── components/  (separate files for each component, i.e. Authenticate page,
        │                 Add Journal, Edit Journal, Journals, etc., plus shared
        │                 components such as buttons and layouts)
        ├── pages/
        │   └── index.js page and Home page
        ├── context/     (containing the useContext and useReducer hooks as well as the states)
        ├── graphql/     (containing all mutations and queries)
        ├── utils/
        ├── App.js
        └── CSS files

I then moved all of our image assets to Cloudinary instead of storing them in the project directory. This not only reduces the size of the repository but also lets us optimize our images, and it is generally good industry practice.

Then, I removed the components built with react-bootstrap and rewrote them in plain vanilla CSS, as suggested by my mentors, so that we could customize as much as possible while keeping the list of dependencies small. This took some time, and we mostly ended up with new designs for our footer, navbar, and landing pages.

Auth components

The next week was spent building the auth components, which consisted of signup and login pages with all the validation checks!

[Screenshot: Sign Up page]

[Screenshot: Log In page]

JSON server and the Journal List component, migrating to Yarn

I then built a component to display the entire list of journals stored in the database. For the mock backend, I used a JSON server that served the dummy data. Previously, we used npm as the package manager, but I suggested switching to Yarn since the backend already used Yarn and it has a reputation for being a better package manager.

[Screenshot: Journal List]
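
As a rough illustration of how the Journal List was fed during this mock-backend phase (a sketch assuming the json-server package; the port, route, and field names are placeholders, since json-server simply serves whatever is in db.json):

import { useEffect, useState } from 'react';

// Assumes json-server is running locally, e.g. `npx json-server --watch db.json --port 4000`,
// and that db.json contains a "journals" array with title and issn fields (illustrative only).
const JournalList = () => {
  const [journals, setJournals] = useState([]);

  useEffect(() => {
    fetch('http://localhost:4000/journals')
      .then((res) => res.json())
      .then((data) => setJournals(data))
      .catch((err) => console.error('Failed to fetch journals', err));
  }, []);

  return (
    <ul>
      {journals.map((journal) => (
        <li key={journal.issn}>
          {journal.title} ({journal.issn})
        </li>
      ))}
    </ul>
  );
};

export default JournalList;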

Policy details component

Next, I worked on a component that fetches and displays all the policy details when a particular journal is clicked. This involved researching the various policies that come with open source scientific journals. We currently have eight fields:

  1. First Year
  2. Policy Title
  3. Policy Type
  4. Enforced
  5. Data Availability Statement Published
  6. Data Peer Reviewed
  7. Data Shared
  8. Enforced Evidence Details

Add Journal Component

Developed a component to help users add a journal and its policies to the database.

[Screenshot: Add Journal]

CRUD operations, search bar

Developed components to edit and delete a particular journal, and made a search bar for the Journal List page that can search for a journal either by its title or by its ISSN.
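
The filtering logic itself is straightforward; a minimal sketch of the idea (names are illustrative, not the exact implementation):

// Filter the fetched journals by title or ISSN, based on the search bar input.
const filterJournals = (journals, searchTerm) => {
  const term = searchTerm.trim().toLowerCase();
  if (!term) return journals;
  return journals.filter(
    (journal) =>
      journal.title.toLowerCase().includes(term) ||
      journal.issn.toLowerCase().includes(term),
  );
};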

PRs:

  1. https://github.com/codeisscience/journal-policy-tracker-frontend/pull/163
  2. https://github.com/codeisscience/journal-policy-tracker-frontend/pull/165
  3. https://github.com/codeisscience/journal-policy-tracker-frontend/pull/168
  4. https://github.com/codeisscience/journal-policy-tracker-frontend/pull/170
  5. https://github.com/codeisscience/journal-policy-tracker-frontend/pull/173
  6. https://github.com/codeisscience/journal-policy-tracker-frontend/pull/175
  7. https://github.com/codeisscience/journal-policy-tracker-frontend/pull/177
  8. https://github.com/codeisscience/journal-policy-tracker-frontend/pull/178
  9. https://github.com/codeisscience/journal-policy-tracker-frontend/pull/179
  10. https://github.com/codeisscience/journal-policy-tracker-frontend/pull/186

Phase 2

State Management

A heavy package like Redux wasn't essential, and neither prop drilling nor juggling multiple useState hooks is good practice. To solve this, I used the useContext and useReducer hooks to set up global state management. At the end of the day, React is a state management library 😉.
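
Roughly, the setup looks like the sketch below (simplified; the real context holds more state and actions, and the names here are illustrative):

import { createContext, useContext, useReducer } from 'react';

// Illustrative initial state and reducer; the actual state shape in the project differs.
const initialState = { user: null, journals: [] };

const reducer = (state, action) => {
  switch (action.type) {
    case 'SET_USER':
      return { ...state, user: action.payload };
    case 'SET_JOURNALS':
      return { ...state, journals: action.payload };
    default:
      return state;
  }
};

const AppContext = createContext();

export const AppProvider = ({ children }) => {
  const [state, dispatch] = useReducer(reducer, initialState);
  return (
    <AppContext.Provider value={{ state, dispatch }}>
      {children}
    </AppContext.Provider>
  );
};

// Any component wrapped in AppProvider can read or update global state without prop drilling.
export const useAppContext = () => useContext(AppContext);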

Modularization

Created common components like layouts, text elements, and buttons that can be reused everywhere. This speeds up development and is generally good practice.

GraphQL integration!

After the backend was ready for integration, I deleted the mock JSON server and changed how things worked in the app. With a REST API, you mostly need a single call to fetch the data for all your components, i.e., the Journal List and the Journal Policy Detail. But in the process, you end up over-fetching data from the API that you might not need to render on the page. So what do we do? For this particular reason, my mentors had decided to move to GraphQL. GraphQL gives you precisely what you ask for in a single POST/GET request, but that also meant we had to make separate calls for separate components.

Query for Journal List:

query GetAllJournals($currentPageNumber: Int!, $limitValue: Int!) {
  getAllJournals(currentPageNumber: $currentPageNumber, limitValue: $limitValue) {
    journals {
      id
      title
      url
      issn
      domainName
      createdAt
      updatedAt
      createdBy
    }
    totalJournals
  }
}

Query for Journal Policy Details:

query GetJournalByISSN($issn: String!) {
  getJournalByISSN(issn: $issn) {
    id
    title
    url
    issn
    domainName
    policies {
      title
      firstYear
      lastYear
      policyType
      isDataAvailabilityStatementPublished
      isDataShared
      isDataPeerReviewed
      enforced
      enforcedEvidence
    }
    createdAt
    updatedAt
    createdBy
  }
}
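
On the component side, each query is consumed with Apollo Client's useQuery hook. A simplified sketch for the Journal List (the pagination values and rendered fields are placeholders):

import { useQuery, gql } from '@apollo/client';

const GET_ALL_JOURNALS = gql`
  query GetAllJournals($currentPageNumber: Int!, $limitValue: Int!) {
    getAllJournals(currentPageNumber: $currentPageNumber, limitValue: $limitValue) {
      journals {
        id
        title
        issn
      }
      totalJournals
    }
  }
`;

const JournalList = () => {
  const { loading, error, data } = useQuery(GET_ALL_JOURNALS, {
    variables: { currentPageNumber: 1, limitValue: 10 },
  });

  if (loading) return <p>Loading...</p>;
  if (error) return <p>Something went wrong.</p>;

  return (
    <ul>
      {data.getAllJournals.journals.map((journal) => (
        <li key={journal.id}>{journal.title}</li>
      ))}
    </ul>
  );
};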

The entire process took some time and was quite challenging, but I completed it with my co-mentee's help!

User profile and log out component

Created a component to display the logged-in user, along with a button to log out and delete the cookie.

[Screenshot: User Profile]

PRs:

  1. https://github.com/codeisscience/journal-policy-tracker-frontend/pull/189
  2. https://github.com/codeisscience/journal-policy-tracker-frontend/pull/190
  3. https://github.com/codeisscience/journal-policy-tracker-frontend/pull/191
  4. https://github.com/codeisscience/journal-policy-tracker-frontend/pull/192
  5. https://github.com/codeisscience/journal-policy-tracker-frontend/pull/193
  6. https://github.com/codeisscience/journal-policy-tracker-frontend/pull/194

The next steps

Though we delivered almost everything we had initially proposed, there are still a few things that could be implemented to further improve the project, including stronger authentication, code optimization, UI/UX enhancements, and a much-needed admin dashboard. We are also planning to onboard new contributors to work on beginner-friendly issues.

Final Thoughts

My experience throughout the mentorship was amazing, and I can't thank my mentors enough for making it all happen. I got to learn a lot of things, and it was challenging to manage full-time college during my GSoC term. I'm delighted to be a part of the Code Is Science team and will be contributing in the future too.

Devesh · 9 min read

This article covers an overview of the community bonding period and the first month of the coding period that I experienced with my project, Journal Policy Tracker Backend, under the Open Bioinformatics Foundation during Google Summer of Code 2022.

About the Project

What is Journal Policy Tracker?

The Journal Policy Tracker is going to be a web app where anyone can look up the policy details of a published journal as well as add the policy details of a new journal to our database.

What is the Expected Output of the Project?

The output of this project is supposed to be a fully fledged backend for the Journal Policy Tracker. Currently, the backend runs on Flask and SQLite3. Over the timeline of this GSoC project, I will be redoing the backend from scratch using Express, GraphQL, Apollo Server, and MongoDB.

Meet and Greet

The start of my community bonding period was very exciting, as I learned that I had been selected for GSoC this year. Through my mentors, I became aware of the Slack channel for my group, Code Is Science, which we use for everyday communication among our team members.

During this community bonding period, I had two meet-and-greet video calls with my mentors. All the mentors in our group were extremely polite and supportive. They answered all the questions and doubts that I had regarding the project during our calls.

The first call was with Pritish Samal, who is my immediate mentor for the backend project. The second call was a combined meet and greet with the entire Code Is Science team, consisting of our group mentor as well as the frontend mentor and mentee.

Discussions

During our meeting, we discussed trying to complete the entire backend as soon as possible so that, once it's done, we can spend more time integrating the frontend and backend without any hassle. We also decided to keep track of the timeline and developments of this project in Notion.

Pritish also explained the concept of Conventional Commits to me and suggested that we follow it in our project. Conventional Commits is basically a set of rules for writing git commit messages. Using Conventional Commits, the reader gets valuable context about a commit, such as whether it adds a new feature, fixes a bug, refactors code, or adds tests. Those commit messages will also be used to implement Semantic Versioning in our project.
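
For example, a commit adding a feature might be written as feat: add journal CRUD resolvers, while a bug fix might be fix: handle invalid ISSN in journal query (illustrative messages, not actual commits from the project); prefixes like feat and fix are what tooling can later use to derive semantic version bumps.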

Researching

During this time, I went through a significant amount of documentation for Express, Apollo Server, GraphQL, mongoose, and MongoDB, and watched numerous YouTube tutorials to better understand the tech stack that is going to be used in the project.

Boilerplate for our Project

After a fair amount of discussion with my mentors and the team, I had a decent idea of what the project is supposed to look like in the end. With all that in mind, I started my work on the project. My first pull request was a boilerplate for the project with an appropriate folder structure; in that PR, I also created a simple working backend with minimal functionality.
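
As a rough idea of what such a minimal setup looks like (a sketch assuming apollo-server-express and mongoose, not the exact boilerplate from the PR):

const express = require('express');
const mongoose = require('mongoose');
const { ApolloServer, gql } = require('apollo-server-express');

// A minimal schema and resolver, just enough to verify the server is wired up correctly.
const typeDefs = gql`
  type Query {
    hello: String!
  }
`;

const resolvers = {
  Query: {
    hello: () => 'Journal Policy Tracker backend is running',
  },
};

const startServer = async () => {
  const app = express();

  // The MongoDB connection string is assumed to come from an environment variable.
  await mongoose.connect(process.env.MONGODB_URI);

  const server = new ApolloServer({ typeDefs, resolvers });
  await server.start();
  server.applyMiddleware({ app });

  app.listen(4000, () =>
    console.log('Server ready at http://localhost:4000/graphql'),
  );
};

startServer();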


Working on the CRUD API for Journals

The first week began with me working on the Journal CRUD API. CRUD stands for Create, Read, Update and Delete.

My goal for this week was to finish the functional part of the CRUD API where we can Create, Read, Update and Delete journal entities from our database.

The journal entity did not yet contain all the fields, because we still needed to have some discussions about the contents of a journal before we could finalize the schema.

Later this week we decided upon the final schema for the journal entity. It is supposed to look something like this:

title: String;
url: String;
issn: String;
domainName: String;
policies: {
  title: String;
  firstYear: Number;
  policyType: String;
  isDataAvailabilityStatementPublished: Boolean;
  isDataShared: Boolean;
  isDataPeerReviewed: Boolean;
  enforced: Boolean;
  enforcementEvidence: String;
};
createdAt: Date;
updatedAt: Date;
createdBy: User;

We could have minor changes as we go along the development timeline, but it will look similar to this.
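
In mongoose terms, the journal model might be sketched roughly as below (illustrative only: options such as required and unique, the ObjectId reference for createdBy, and the use of timestamps for createdAt/updatedAt are my assumptions, not the finalized model):

const mongoose = require('mongoose');

const journalSchema = new mongoose.Schema(
  {
    title: { type: String, required: true },
    url: String,
    issn: { type: String, required: true, unique: true },
    domainName: String,
    policies: {
      title: String,
      firstYear: Number,
      policyType: String,
      isDataAvailabilityStatementPublished: Boolean,
      isDataShared: Boolean,
      isDataPeerReviewed: Boolean,
      enforced: Boolean,
      enforcementEvidence: String,
    },
    createdBy: { type: mongoose.Schema.Types.ObjectId, ref: 'User' },
  },
  { timestamps: true }, // adds createdAt and updatedAt automatically
);

module.exports = mongoose.model('Journal', journalSchema);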

This week I also faced a problem while making pull requests. My mentor Pritish helped me with my problem and explained to me the proper workflow that I should be following while making pull requests so I don’t get errors.

In the end, after solving the pull request problem, I made a pull request that added a basic journal CRUD API to our project. More work will be done on it in forthcoming pull requests, which will be documented later.
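
A trimmed sketch of what such resolvers can look like (the mutation names and arguments are illustrative, and the real resolvers also handle pagination and validation):

// Assumes the mongoose Journal model sketched above (the path is hypothetical).
const Journal = require('../models/Journal');

const journalResolvers = {
  Query: {
    getAllJournals: async () => Journal.find(),
    getJournalByISSN: async (_, { issn }) => Journal.findOne({ issn }),
  },
  Mutation: {
    createJournal: async (_, { input }) => Journal.create(input),
    updateJournal: async (_, { issn, input }) =>
      Journal.findOneAndUpdate({ issn }, input, { new: true }),
    deleteJournal: async (_, { issn }) => Journal.findOneAndDelete({ issn }),
  },
};

module.exports = journalResolvers;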

Working on the User Authentication System

After putting together a basic working journal CRUD API, I started working on the User Authentication System of our project.

At first, we decided that we were going to use Passport.js to implement authentication in our project, as it is the de facto library used for this purpose. But as I did more research, I realized that it doesn't play too well with GraphQL. If we are only using the local strategy (a basic username and password stored in a database) and not Google or Facebook strategies, then dropping Passport.js and handling authentication inside the GraphQL resolvers is the most convenient option and the easiest to maintain.

So I decided to go forward with that approach and created a simple authentication system with the GraphQL resolvers.

The goal of the user authentication system, as of now, is to let a user register a new account, log into that account, and stay logged in. To keep the user logged in, we decided to use the sessions and cookies approach.

Hashing and Salting

I did a good amount of research on hashing libraries and ended up choosing bcrypt because it is currently the most popular library used for this purpose and is very easy to implement. While researching, I found this extremely informative video that explained the functionality and implementation of bcrypt in good detail, which helped me a lot.
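
In practice, the usage boils down to two calls, roughly like this (a sketch; the salt-round count of 10 and the helper names are assumptions, not necessarily the project's setup):

const bcrypt = require('bcrypt');

const SALT_ROUNDS = 10;

// On register: store only the salted hash, never the plain-text password.
const hashPassword = async (plainPassword) => bcrypt.hash(plainPassword, SALT_ROUNDS);

// On login: compare the submitted password against the stored hash.
const verifyPassword = async (plainPassword, storedHash) =>
  bcrypt.compare(plainPassword, storedHash);

module.exports = { hashPassword, verifyPassword };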

Error Handling

After implementing all the essentials, I added error handling. It is supposed to throw an appropriate error if someone enters the wrong credentials during the register or login process. I implemented these checks using simple conditional statements inside the GraphQL resolvers, which throw an appropriate error message depending on the error code and error field.
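
As an illustration of the kind of checks involved, here is a sketch of a login resolver (the error shape, messages, and model/context names are illustrative, not the exact code from the project):

const bcrypt = require('bcrypt');
const User = require('../models/User'); // hypothetical path to a mongoose User model

// Assumes the express-session setup described later, so req.session is available via the context.
const login = async (_, { username, password }, { req }) => {
  const user = await User.findOne({ username });
  if (!user) {
    return { errors: [{ field: 'username', message: 'That username does not exist' }] };
  }

  const valid = await bcrypt.compare(password, user.password);
  if (!valid) {
    return { errors: [{ field: 'password', message: 'Incorrect password' }] };
  }

  // Valid credentials: remember the user in the session so they stay logged in.
  req.session.userId = user.id;
  return { user };
};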

After finishing this, I made a pull request which added a working user authentication system to our project with hashing and salting of passwords.

Sessions

Implementation of Sessions

Implementing sessions in our Express app was very straightforward with express-session. I simply followed the documentation and implemented sessions as follows:

app.use(
  session({
    name: COOKIE_NAME,
    store: new RedisStore({
      client: redis,
      disableTouch: true,
    }),
    cookie: {
      maxAge: 1000 * 60 * 60 * 24 * 365 * 1, // 1 year
      httpOnly: true,
      sameSite: 'none',
      secure: true,
    },
    saveUninitialized: false,
    secret: process.env.SESSION_SECRET,
    resave: false,
  }),
);

Session data is fetched whenever a particular user executes an action, so that the server knows which user executed it. In this use case, the data is fetched extremely frequently and at a very fast rate. To make sure it is always available with the least amount of delay, we are going to use an in-memory database called Redis. Redis stores all of its data in memory (RAM), so the fetching delay is extremely small, which results in a really fast experience for the end user.

Problems While Enabling Cookies

I have almost finished implementing sessions on the backend. While I didn't face many challenges with the sessions themselves, I had a few problems enabling cookies with Apollo Server.

Enabling cookies in GraphQL Playground was easier, but since Playground has been retired and replaced by Apollo Studio, enabling cookies has become comparatively more difficult.

To enable cookies in GraphQL Playground, aside from configuring CORS properly, we only had to change one value in the settings: request.credentials had to be set to include.

To enable cookies in Apollo Server, we have to do the following extra steps:

  1. In the cookie settings, set sameSite to "none" and secure to true.
  2. Go to the settings in Apollo Studio, turn on “Include Cookies”, and add a new shared header with the key x-forwarded-proto and the value https.
  3. Add app.set("trust proxy", 1) just above our CORS config.

Dynamic Origin CORS

For development, I needed to have multiple origins connect to the backend: one connecting from Apollo Studio and another from the frontend that I work on locally for testing purposes. For that, I used the cors node package, which supports dynamic origins.

To allow multiple origins on my backend, I used the following code:

var whitelist = ['https://studio.apollographql.com', 'http://localhost:3000'];

var corsOptions = {
  origin: function (origin, callback) {
    if (whitelist.indexOf(origin) !== -1 || !origin) {
      callback(null, true);
    } else {
      callback(new Error('Not allowed by CORS'));
    }
  },
  credentials: true,
};

After the session PR is merged, I will work on integrating the user entity and the journal entity so that every journal contains the ID of the user who created it, which we can use to fetch the user's details.

FIN

So that was all about my project, the community bonding period, and the first month of my GSoC 2022. I want to thank my mentors and the OBF for believing in me to work on this project. In the forthcoming PRs, I will be adding more features and error handling to the journal CRUD API and the user authentication API. I will also be adding authorization middleware and refactoring some code to make it easier to read and understand. I'm excited to complete the work on this project and get it up and running for people to use. I will be writing more articles in the future, documenting the entire process of my project's development and completion.