Thursday, September 3, 2009

Socket Programming - ASYNCHRONOUS

What is Asynchronous Socket Programming?
  • "Event-driven" programming or "select()" based multiplexing
  • The concept of handling multiple connections in a single thread/process
When do we need this?
  • Assume you have to write a server that will be "hit" by "n" clients, each sending some request
  • You could choose any of the following approaches:
  • synchronous: you handle one request at a time, each in turn (a minimal sketch follows this list).
    pros: simple
    cons: any one request can hold up all the other requests
  • fork: you start a new process to handle each request.
    pros: easy
    cons: does not scale well; hundreds of connections means hundreds of processes.
    fork() is the Unix programmer's hammer. Because it's available, every problem looks like a nail. It's usually overkill.
  • threads: start a new thread to handle each request.
    pros: easy, and kinder to the kernel than using fork, since threads usually have much less overhead
    cons: your machine may not have threads, and threaded programming can get very complicated very fast, with worries about controlling access to shared resources.
  • The best solution here is the "select" system call.
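
Before looking at "select", here is a minimal sketch in C of the synchronous approach from the list above: one blocking accept() loop serving a single client at a time. The port number (9000) and the echo-style request handling are illustrative assumptions, not something from the original discussion.

/* Synchronous server sketch: socket() -> bind() -> listen() -> accept().
 * One client is served at a time; everyone else waits.
 * Port 9000 and the echo handling are illustrative choices. */
#include <string.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

int main(void)
{
    int listener = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family      = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port        = htons(9000);
    bind(listener, (struct sockaddr *)&addr, sizeof(addr));
    listen(listener, 5);

    for (;;) {
        /* accept() blocks until a client connects; while this client is
         * being served, every other client has to wait in the backlog. */
        int client = accept(listener, NULL, NULL);
        char buf[1024];
        ssize_t n = recv(client, buf, sizeof(buf), 0);
        if (n > 0)
            send(client, buf, n, 0);   /* echo the request back */
        close(client);
    }
}
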
How "select" works?
  • The normal hierarchy of socket programming calls on the server side is "socket() -> bind() -> listen() -> accept()".
  • Notice that "accept" is the blocking call here. To avoid blocking on a single socket, we use "select", which monitors all the connected sockets and reports which of them have requests pending for "reading, writing or error conditions".
  • If any requests are pending, "select" returns the number of descriptors that are ready to be processed.
  • Hence the hierarchy becomes "socket() -> bind() -> listen() -> select() -> accept()".
  • The disadvantage is that we still need to iterate through all the connected descriptors, asking each one the "Are you the one with pending requests?" question (see the sketch below).
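
Below is a sketch of the select()-based version, using the same illustrative port (9000) and echo-style handling as the synchronous sketch above. All watched descriptors live in an fd_set; select() marks the ones that are ready, and the loop then walks every descriptor, accepting new connections on the listener and reading requests from the clients, all in one process.

/* select()-based server sketch: socket() -> bind() -> listen() ->
 * select() -> accept().  One process handles many clients. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/select.h>
#include <sys/socket.h>

int main(void)
{
    int listener = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family      = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port        = htons(9000);
    bind(listener, (struct sockaddr *)&addr, sizeof(addr));
    listen(listener, 5);

    fd_set master;               /* all descriptors we are watching  */
    FD_ZERO(&master);
    FD_SET(listener, &master);
    int fdmax = listener;        /* highest descriptor number so far */

    for (;;) {
        fd_set readfds = master;           /* select() modifies its copy */
        if (select(fdmax + 1, &readfds, NULL, NULL, NULL) < 0) {
            perror("select");
            break;
        }
        /* Iterate over every descriptor and ask:
         * "Are you the one with pending requests?" */
        for (int fd = 0; fd <= fdmax; fd++) {
            if (!FD_ISSET(fd, &readfds))
                continue;
            if (fd == listener) {
                /* A readable listener means a new connection is pending. */
                int client = accept(listener, NULL, NULL);
                FD_SET(client, &master);
                if (client > fdmax)
                    fdmax = client;
            } else {
                /* A readable client socket has a request waiting. */
                char buf[1024];
                ssize_t n = recv(fd, buf, sizeof(buf), 0);
                if (n <= 0) {              /* connection closed or error */
                    close(fd);
                    FD_CLR(fd, &master);
                } else {
                    send(fd, buf, n, 0);   /* echo the request back */
                }
            }
        }
    }
    return 0;
}
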
