Implement a queue inside an API Gateway?

Recently we had a client ask for an API Gateway solution, in this case Kong. They currently have 10 services (200 APIs) running on a really legacy stack (built with C++ and Fortran). It's a pain for third parties to integrate with, hence they asked for an API Gateway in front to handle authentication, rate limiting, and so on.

However, there's one thing that I can't grasp, and personally I don't think they do either. Because of the low QPS, they specifically asked for a queue server that sits between the API Gateway and the rest of the services. The reasoning is that we should hold incoming requests in the queue, and release them to the legacy services only once those services are done with the previous requests.
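If I understand their ask correctly, it amounts to something like the sketch below: a bounded queue in front of the backend, with a worker that feeds the legacy service one request at a time. All names here (`call_legacy_service`, `submit`) are invented for illustration, not anything from Kong or the client's stack.

```python
import queue
import threading

def call_legacy_service(payload):
    # Stand-in for the real (slow, single-threaded) legacy backend call.
    return f"processed:{payload}"

request_queue = queue.Queue(maxsize=100)  # bounded: back-pressure when full
results = {}

def worker():
    # Single worker = at most one in-flight request against the backend.
    while True:
        job_id, payload = request_queue.get()
        try:
            results[job_id] = call_legacy_service(payload)
        finally:
            request_queue.task_done()

threading.Thread(target=worker, daemon=True).start()

def submit(job_id, payload):
    # What the gateway-side hook would do: enqueue or shed load.
    try:
        request_queue.put_nowait((job_id, payload))
        return "accepted"
    except queue.Full:
        return "rejected"  # the gateway would answer 429/503 here
```

The catch, as I get into below, is that `submit` returns before the result exists, which is exactly what breaks the existing synchronous request/response contract.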

I think the intention is good: pausing incoming requests so they don't further burden the legacy services. But given the nature of a queue, doesn't it mean the existing frontend applications will need to change how they handle request/response too? To an event-based, callback-style, or socket-notification model.
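To make the frontend impact concrete, here is a minimal sketch of the submit/poll pattern the queue would force on clients (one common alternative is a callback or websocket push instead of polling). Everything here is hypothetical; the names `submit_job`, `complete_job`, and `poll` are made up for the example.

```python
import itertools

_jobs = {}
_ids = itertools.count(1)

def submit_job(payload):
    """Gateway side: enqueue the work and return a job id immediately
    (the HTTP response would be 202 Accepted, not the actual result)."""
    job_id = str(next(_ids))
    _jobs[job_id] = {"status": "queued", "result": None}
    return job_id

def complete_job(job_id, result):
    """Worker side: record the result once the legacy service finishes."""
    _jobs[job_id] = {"status": "done", "result": result}

def poll(job_id):
    """Client side: the frontend now has to poll (or subscribe) for the
    result instead of blocking on a single request/response call."""
    return _jobs.get(job_id, {"status": "unknown", "result": None})
```

That second round trip (or push channel) is the change every existing frontend would have to absorb.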

And putting a queue in the middle essentially adds unnecessary complexity to the entire platform, right? Personally, I think what they should do instead is rework the existing services: identify the bottlenecks and make them scalable.

I'm very confused by this client. Please let me know what you all think. Thanks in advance!

submitted by /u/rexlow0823

from Software Development – methodologies, techniques, and tools. Covering Agile, RUP, Waterfall + more! https://ift.tt/6fd37Gp
