
    A stream lets you act on data chunk by chunk rather than loading the whole dataset into RAM; the platform decides how many chunks it should carry at a time, and the size of the buffer a chunk is stored in sets the maximum size a chunk can have.

    Let's take a scenario:

    The data is large compared with what the client can take in at once; think of this as the capacity of the connection, its ability to receive input, so you can't just send all of that big data in one go. You already know about this from studying how HTTP and TCP/IP work. So what you have to do is break the data up into chunks, and the way to implement a stream is to load it in a chunk at a time as you go.


    Each time some amount of data is piped in from the server, it gets stored in what's called a buffer, which gets loaded up with these chunks. With an HTTP stream, the contents of that buffer may arrive as packets from all different places and be reassembled on the client side into a buffer. In Node we actually deal with the individual chunks in our streams: streams are implemented in Node essentially as instances of EventEmitter, so they can act when they receive data or when data has finished being received. There is a 'data' event, for instance, whose payload in our client is a buffer holding some of that big data.

    There are three main kinds of stream, plus one special case.

    The easiest is the readable stream. For example, Node's fs module gives access to the file system: fs.createReadStream() creates a read stream that gives you access to the data you can stream in from a file in the file system. The most basic readable stream you probably deal with is process.stdin: when you are running a Node process, whatever you type into it arrives on standard input.


    There are also writable streams. The analogs of the two above are fs.createWriteStream(), for saving a file to the file system, and process.stdout, for writing to the standard output of whatever screen you are working with.

    – DUPLEX –

    There are also duplex streams, which are essentially just a readable stream and a writable stream combined into one single object, and each side has its own buffer. There is no magic to a duplex stream; it is really just two streams combined into one. A TCP socket is an example.


    The special case of a duplex stream is the transform stream: a duplex stream where data written in goes through some sort of transform process, and the result is then readable by that same client, so it all happens within the same object. In Node, one of the main examples is gzip (the zlib module) for compressing and decompressing.

    OPERATIONS/EVENTS of different Streams
    1. Readable:

      - 'data', 'close', 'end', 'error' events; .pipe(destination), .setEncoding(), etc.

    2. Writable:

      - 'drain', 'pipe', 'error' events; .write(), .end(), etc.

    – Some streams are duplex and combine a readable and a writable stream.

    – Transform streams: even though a transform stream is duplex, you only have access to one side of it at a time; you write into it, some sort of transform function runs, and then you read the result out.

    Uses of Streams:
    1. They allow data to be processed chunk by chunk or line by line.
    2. They are memory efficient, as there is no need to hold an entire large file/buffer in memory.
    3. They allow a more functional approach, where small modules can be chained together.

    Implementation of echo:
    1. process.stdin.pipe(process.stdout)
    2. process.stdin (readable) is piped into process.stdout (writable).
    3. Using transform streams: the through2 module.

    Copyright 1999- Ducat Creative, All rights reserved.