Re-thinking STARLIMS architecture

There is something about STARLIMS that has been bugging me for a long time. Don’t get me wrong – I think it is a great platform. I just question the viability of XFD in 2024, and the choice of Sencha for the HTML side of it.

But there is an even more critical point: I question the principle of using the same server for the “backend” and the “frontend”. Simplified, the current architecture of STARLIMS looks something like this:

Sure, you can add load balancers, multiple servers, batch processors… But ultimately, the server’s role is both backend and web rendering, without really following the Server-Side Rendering (SSR) pattern: it hosts and serves the rendering code from the backend and lets the client do the rendering. So, in fact, it is Client-Side Rendering (CSR) with most of SSR’s drawbacks.

This got me thinking. What if we really decoupled the front end from the backend? And what if we made this using real micro services? You know, something like this:

Let me explain the layers.

React.js

React needs no introduction. The famous open-source library behind Facebook. Very fast and easy to pick up, huge community… Even the AI chatbots will generate good React components if you ask nicely! As for security, it’s like any other platform: it’s as secure as you make it. And if you pair it with Node.js, it all gets very easy, which brings me to the next component…

Node.js

Another one that needs no introduction. JavaScript on a backend? Nice! Here, on one end, you handle the sessions and security (with React) and communicate with STARLIMS through the out-of-the-box REST API. Node can be just a proxy to STARLIMS (that is the case currently), but it should also be leveraged to extend the REST APIs. It is a lot easier to implement new APIs there and connect to STARLIMS (or anything else, for that matter!), which speeds up the process. Plus, you easily get cool stuff like WebSockets if you want, and you can cache some reference data in Redis to go even faster!
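To make that concrete, here is a minimal sketch of what such a proxy route could look like with Express. The route path, payload shape, and STARLIMS base URL are my assumptions for illustration, not the actual product API:

// Minimal Express proxy sketch (Node 18+, so fetch is built in).
// The /api/runscript route and payload shape are illustrative assumptions.
const express = require('express');
const app = express();
app.use(express.json());

// Hypothetical base URL of the STARLIMS REST API
const STARLIMS_BASE = process.env.STARLIMS_BASE || 'https://starlims.example.com/api';

app.post('/api/runscript', async (req, res) => {
  try {
    // Forward { script, parameters } to the generic STARLIMS endpoint
    const response = await fetch(`${STARLIMS_BASE}/generic`, {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify(req.body),
    });
    res.status(response.status).json(await response.json());
  } catch (err) {
    res.status(502).json({ error: 'STARLIMS unreachable' });
  }
});

app.listen(5050, () => console.log('Proxy listening on port 5050'));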

Redis

A fast, lightweight, free cache (well, it was free when I started). I currently use it only for sessions: since the STARLIMS REST API is stateless, I manage sessions in Node.js and store them in Redis, which allows me to spin up multiple Node.js instances (load balancing?) and share sessions across them. If you don’t need to spin up multiple proxies, you don’t need this. But hey, it’s cooler with it, no?
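To illustrate how that session sharing works, here is a rough sketch using the node-redis client; the key format and the one-hour TTL are my choices, not necessarily what the project uses:

// Sessions stored in Redis so any Node.js instance can resolve them.
// Uses the "redis" npm package (v4 API); key naming and TTL are illustrative.
const { createClient } = require('redis');
const crypto = require('crypto');

const redis = createClient({ url: process.env.REDIS_URL || 'redis://localhost:6379' });
redis.connect().catch(console.error);

// Create a session and return the id (e.g. to put in a cookie).
async function createSession(user) {
  const sid = crypto.randomUUID();
  await redis.set(`session:${sid}`, JSON.stringify(user), { EX: 3600 }); // 1 hour TTL
  return sid;
}

// Any instance pointing at the same Redis sees the same sessions.
async function getSession(sid) {
  const raw = await redis.get(`session:${sid}`);
  return raw ? JSON.parse(raw) : null;
}

module.exports = { createSession, getSession };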

I was also thinking (I haven’t done anything about this yet) of having a cron job running in Node.js to periodically pull reference data from STARLIMS (test plans, tests, analytes, specifications, methods, etc.) and update the Redis cache. Some of that data could then be served to the UI (React.js) instead of hitting STARLIMS. With the updated Redis license, though, I don’t know. I think this use case is fine, but I would need to verify.
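For what it’s worth, here is a sketch of what that periodic refresh could look like. The script name, cache key, and refresh interval are all hypothetical:

// Periodically pull reference data from STARLIMS into the Redis cache.
// Everything specific here (script name, cache key, interval) is made up.
const { createClient } = require('redis');
const redis = createClient({ url: process.env.REDIS_URL || 'redis://localhost:6379' });
redis.connect().catch(console.error);

const STARLIMS_BASE = process.env.STARLIMS_BASE || 'https://starlims.example.com/api';

async function refreshReferenceData() {
  // e.g. pull test plans through the generic endpoint (hypothetical script name)
  const res = await fetch(`${STARLIMS_BASE}/generic`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ script: 'RefData.GetTestPlans', parameters: [] }),
  });
  const data = await res.json();
  // Cache for 24 hours so the UI can read it without hitting STARLIMS
  await redis.set('refdata:testplans', JSON.stringify(data), { EX: 86400 });
}

// Poor man's cron: refresh every 15 minutes
setInterval(() => refreshReferenceData().catch(console.error), 15 * 60 * 1000);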

… BUT WHY?

Because I can! – Michel R.

Well, just because. I was learning these technologies, had this idea, and decided to test the theory. So I tried. And it looks like it works! There are multiple theoretical advantages to this approach:

  1. Performance: Very fast (and potentially responsive) UI.
  2. Technology: New technology availability (websockets, data in movement, streaming, etc.).
  3. Integration: API first paradigm, Node.js can make it really easy to integrate with any technology!
  4. Source control: 100% Git for UI code, opening all git concepts (push, pull requests, merge, releases, packages, etc.).
  5. Optimization: Reduce resource consumption from STARLIMS web servers.
  6. Scalability: High scalability through containerization and micro-services.
  7. Pattern: Separation of concerns. Each component does what it’s best at.
  8. Hiring: There is a higher availability of React.js and Node.js developers than STARLIMS developers!

Here are some screenshots of what it can look like:

As you can see, at this stage it is very limited. But it does work, and I like a couple of the ideas / features I came up with, like F1 for Help, the keyboard shortcuts support and, more importantly, the speed… It is snappy. In fact, the speed is limited by what the STARLIMS REST API can provide when getting data, but otherwise, everything else is way, way faster than what I’m used to.

How does it work, really?

This is magic! – Michel R.

Magic! … No, really, I somewhat “cheated”. I implemented a Generic API in the STARLIMS REST API. This endpoint supports both ExecFunction and RunDS, as well as impersonation. Considering that the STARLIMS REST API is quite secure (it uses anti-tampering patterns; you can ask them to explain that to you if you want) and reliable, I created a generic endpoint. It receives a payload containing the script (or datasource) to run, along with its parameters, and returns the resulting data in JSON format.

Therefore, in React, you would write code very similar to lims.CallServer(scriptName, parameters) in XFD/Sencha.
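A small fetch wrapper is enough to get that same ergonomic. This sketch assumes the /api/runscript proxy route and payload shape from earlier, which are mine, not the product’s:

// React-side helper mimicking lims.CallServer(scriptName, parameters).
// The endpoint and payload shape match the hypothetical proxy sketch above.
export async function callServer(scriptName, parameters = []) {
  const res = await fetch('/api/runscript', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ script: scriptName, parameters }),
  });
  if (!res.ok) throw new Error(`callServer failed: ${res.status}`);
  return res.json();
}

// Usage in a component (script name is made up):
// const samples = await callServer('SampleMgmt.GetSamples', [folderId]);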

Me being paranoid, I added a “whitelisting” feature to my generic API, so you can whitelist which scripts are allowed to run through the API. Being lazy, I also added another script that does exactly the same thing without the whitelisting, just so I wouldn’t have to whitelist everything; but hey, if you want that level of control… why not?
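Conceptually, the whitelist boils down to a guard like this (shown in JavaScript purely for illustration; in my setup the real check lives in the STARLIMS-side generic API, and these script names are made up):

// Illustrative whitelist guard for the generic endpoint.
const ALLOWED_SCRIPTS = new Set([
  'SampleMgmt.GetSamples',
  'RefData.GetTestPlans',
]);

function assertAllowed(scriptName) {
  if (!ALLOWED_SCRIPTS.has(scriptName)) {
    throw new Error(`Script not whitelisted: ${scriptName}`);
  }
}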

Conclusion

My non-scientific observation is that this works quite well. The interface is snappy (a lot faster than even Sencha), and developing new views is somewhat easier than in either technology as well.

Tip: you can just ask an AI to generate a view in React using, let’s say, Bootstrap 5 classNames, and perhaps placeholders to call your API endpoints, et voilà! You have something 90% ready.

Or you learn React and Vite, build your own components yourself, and create your own STARLIMS runtime (kind of).

This whole experiment was quite fun, and I learned a ton. I think there might actually be something to do with it. I invite you to take a look at the repositories, which I made public for anyone to use and contribute to under an MIT-with-commercial-restrictions license:

You need both projects to get this working. I recommend you check both READMEs to begin with.

Right now, I am parking this project, but if you would like to learn more, want to evaluate this but need guidance, or are interested in actually using this in production, feel free to drop me an email at [email protected]! Who knows what happens next?


ChatGPT Experiment follow-up

Did you try to look at my experiment lately? Did it time out or give you a bad gateway?


Bad Gateway

Well, read on if you want to know why!

Picture this: I’ve got myself a fancy development instance of a demo application built with ChatGPT. Oh, but hold on, it’s not hosted on some magical cloud server. Nope, it’s right there, in my basement, in my own home! I’ve been using some dynamic DNS from no-ip.com. Living on the edge, right?

Now, here’s where it gets interesting. I had the whole thing running on plain old HTTP!!! I mean, sure, I had a big red disclaimer saying it wasn’t secure, but that just didn’t sit right with me. So, off I went on an adventure to explore the depths of NGINX. I mean, I kinda-sorta knew what it was, but not really. Time to level up!

So, being the curious soul that I am, I started experimenting. It’s not perfect yet, but guess what? I learned about Let’s Encrypt in the process and now I have my very own HTTPS and a valid certificate – still in the basement! Who’s insecure now? (BTW, huge shoutout to leangaurav on medium.com, the best tutorial on this topic out there!)

As if that was not enough, I decided – AT THE SAME TIME – to also scale up the landscape.

See, I’ve been running the whole stack on Docker containers! It’s like some virtual world inside my mini PCs. And speaking of PCs, my trusty Ryzen 5 5500U wasn’t cutting it anymore, so I upgraded to a Ryzen 7 5800H with a whopping 32GB of RAM. Time to unleash some serious power and handle that load like a boss!

Now, you might think that moving everything around would be a piece of cake with Docker, but oh boy, was I in for a ride! I dove headfirst into the rabbit hole of tutorials and documentation to figure it all out. Let me tell you, it was a wild journey, but I emerged smarter and wiser than ever before.

Now, I have a full stack that seems to more or less work, even after a reboot (haha!).

Let me break down the whole landscape at a very high level (in the coming days and weeks, if I still feel like it, I will detail each step). The server is a Trigkey S5. I have the 32GB variant running on a 5800H, which went on sale for $400 CAD on Black Friday – quite a deal! From my research, the best bang for the buck. It’s a mobile CPU, so very good energy-wise, but of course, don’t expect to play AAA games on it!

Concerning my environment, I use Visual Studio Code on Windows 11 with WSL enabled. I installed the Ubuntu WSL, just because that’s the one I am comfortable with. I have a Docker compose file that consists of:

  • NodeJs backend for APIs, database connectivity, RBAC, etc.
  • ViteJs frontend for, well… everything else you see as a user 🙂
  • NGINX set up as a reverse proxy – this is what makes it work over HTTPS

This is what my compose file looks like:

version: '3'

services:

  demo-backend:
    container_name: demo-backend
    build:
      context: my-backend-app
    image: demo-backend:latest
    ports:
      - "5051:5050"
    environment:
      - MONGODB_URI=mongodb://mongodb
    networks:
      - app
    restart: always

  demo-frontend:
    container_name: demo-frontend
    build:
      context: vite-frontend
    image: demo-frontend:latest
    ports:
      - "3301:3301"
    networks:
      - app
    restart: always

  mongodb:
    image: mongo:latest
    container_name: "mongodb"
    ports:
      - 27017:27017
    networks:
      - app
    restart: always

networks:
  app:
    driver: bridge
    name: shared_network

This method is very convenient for running on my computer during development. I have a modified compose file without NGINX and with different ports, which makes it easier for me to make changes and test them. When I’m satisfied, I switch to another compose file matching the ports redirected in my router. I use “docker compose down” followed by “docker compose up” to update my app.
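In practice, switching is just a matter of pointing docker compose at the right file; the dev file name below is mine for illustration:

# Day-to-day development: no NGINX, different ports
docker compose -f docker-compose.dev.yml up --build

# Back to the "real" setup behind the router's redirected ports
docker compose down
docker compose up -d --build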

This is the NGINX compose file. The interesting part is that I use the same network name here as in the previous compose file. When I do this, NGINX can communicate with the containers of the previous compose file. Why did I do this? Well, I have this demo project running, and I’m working on another project with different containers. With some configuration, I will be able to leverage my SSL certificates for both solutions (or as many as I want), with one compose file per project. This will be very handy!

version: "3.3"

services:
  nginx:
    container_name: 'nginx-service'
    image: nginx-nginx:stable
    build:
      context: .
      dockerfile: docker/nginx.Dockerfile
    ports:
      - 3000:3000
      - 3030:3030
      - 5050:5050
    volumes:
      - ./config:/config
      - /etc/letsencrypt:/etc/letsencrypt:ro
      - /tmp/acme_challenge:/tmp/acme_challenge
    networks:
      - app
    restart: always

networks:
  app:
    driver: bridge
    name: shared_network
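For reference, here is roughly what the NGINX site configuration behind this could look like. The host name is a placeholder for my no-ip.com dynamic DNS name, and the certificate paths follow the standard Let’s Encrypt layout mounted read-only in the compose file above:

# Reverse proxy sketch; myapp.ddns.net is a placeholder host name.
server {
    listen 3000 ssl;
    server_name myapp.ddns.net;

    ssl_certificate     /etc/letsencrypt/live/myapp.ddns.net/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/myapp.ddns.net/privkey.pem;

    # Certbot renewal challenges are served from the shared volume
    location /.well-known/acme-challenge/ {
        root /tmp/acme_challenge;
    }

    # Thanks to shared_network, containers are reachable by service name
    location / {
        proxy_pass http://demo-frontend:3301;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}

# A similar server block on port 5050 would proxy to http://demo-backend:5050.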

Of course, this is not like a cloud provider; my small PC can die, and there is no redundancy. But for development and demo purposes? Quite efficient (and cheap)!

As of last night, my stuff runs on HTTPS, and should be a bit more reliable moving forward. The great part about this whole experiment is how much I learned in the process! Yes, it started from ChatGPT. But you know what? I never learned so much so fast. It was very well worth it.

I will not claim to be an expert in all of this, but I feel I now know a lot more about:

  • ChatGPT. I learned some tricks along the way on how to ask and what to ask, as well as how to get out of infinite loops.
  • NodeJS and Express. I knew about them, but now I understand the middleware concepts and connectivity better. I have built some cool APIs.
  • ViteJs. This is quite the boilerplate for getting a web app up and running.
  • Expo and React-Native. This is a parallel project, but I built some nice stuff I will eventually share here. If you want to build Android and iOS apps using React-Native, this framework works great. Learn more on Expo.dev.
  • GitLab. I tried this for the CI/CD capabilities and workflows… Oh my! With Expo, this came in handy!! Push a commit, merge, build, and deploy through EAS! (On the flip side, I reached the limits of the free tier quite fast… I need to decide what I’ll be doing moving forward.) On top of that, I was able to store my containers in their registries, making it even more practical for teamwork!
  • NGINX. The only thing I knew before was: it exists and has to do with web servers. Now I know how to use it as a reverse proxy, and I am starting to feel that I will use it even more in the future.
  • Docker & containerization. Another one of those “I kind of know what it is”… Now I have played with containers and docker compose, and I am only starting to grasp the power of it.
  • Let’s Encrypt. I thought I understood HTTPS. I am still no expert, but now I understand a lot more about how this works, and why it works.
  • Certbot. This is the little magic mouse behind the whole HTTPS process. Check it out!
  • MongoDB. I played with some NoSQL in the past. But now… Oh, now. I love it. I am thinking I prefer it to traditional SQL databases. Just because.

A final note on ChatGPT (since this is where it all started):

The free version of this powerful AI is outdated (I don’t want to pay for it – not yet). This resulted in many frustrations: directives that simply wouldn’t work. I had to fall back to Googling what I was looking for. It turns out that although ChatGPT will often cut the time down by quite a margin, the last stretch is yours. It is not yet at the point where it can replace someone.

But it can help efficiency.

A lot.