HTTP/1.1 uses pipelining and persistent connections between client and server, but responses must still be returned in the order they were requested, so one slow response holds up all the data packets queued behind it for the same destination; this bottleneck is called the head-of-line (HOL) blocking problem.
HTTP/2 uses a binary framing layer through which the streams of data being sent are prioritized and multiplexed over a single connection, which allows parallel communication, and the compact binary frames avoid redundant transfers.
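To make the framing layer concrete: every HTTP/2 frame starts with a fixed 9-byte header carrying the payload length, frame type, flags, and the stream identifier that makes multiplexing possible. A minimal decoding sketch (the layout follows RFC 7540; the function name is our own, not from any library):

```python
def parse_frame_header(data: bytes):
    """Decode the fixed 9-byte HTTP/2 frame header.

    Layout per RFC 7540: 24-bit payload length, 8-bit type,
    8-bit flags, 1 reserved bit + 31-bit stream identifier.
    """
    if len(data) < 9:
        raise ValueError("need at least 9 bytes")
    length = int.from_bytes(data[0:3], "big")
    frame_type = data[3]
    flags = data[4]
    # Mask off the reserved high bit to get the stream id.
    stream_id = int.from_bytes(data[5:9], "big") & 0x7FFFFFFF
    return length, frame_type, flags, stream_id

# A DATA frame (type 0x0) of 5 bytes on stream 1, END_STREAM flag set:
header = (5).to_bytes(3, "big") + bytes([0x0, 0x1]) + (1).to_bytes(4, "big")
print(parse_frame_header(header))  # -> (5, 0, 1, 1)
```

Because each frame names its stream, frames from many responses can be interleaved on one connection and reassembled independently, which is what removes the ordering bottleneck.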
2) BUFFERING AND OVERFLOW
Here the client keeps a receive buffer to store data arriving from the server; when the buffer fills up, the client can inform the server that it is out of buffer space and stop it from sending more data.
With server push, however, data is pushed directly into the client cache without waiting for an acknowledgment from the client or any information about its window size.
So if the server pushes more data than the client can hold, the buffer overflows and data is lost.
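A toy sketch of this flow-control idea (the class and method names are illustrative, not from any HTTP/2 library): the receiver advertises how much room it has left, and any data pushed beyond that window simply overflows.

```python
class ReceiveBuffer:
    """Toy client-side receive buffer with a flow-control window."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self.used = 0
        self.lost = 0

    def window(self) -> int:
        # The free space the client would advertise back to the server.
        return self.capacity - self.used

    def receive(self, nbytes: int) -> None:
        # Data beyond the remaining space overflows and is lost.
        fits = min(nbytes, self.window())
        self.used += fits
        self.lost += nbytes - fits

buf = ReceiveBuffer(capacity=10)
buf.receive(6)   # within the window: all 6 bytes stored
buf.receive(8)   # "blind" push: only 4 bytes fit, 4 are lost
print(buf.used, buf.lost)  # -> 10 4
```

A well-behaved sender would consult `window()` before each send; a push that ignores it reproduces exactly the overflow described above.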
3) SERVER TO PREDICT THE REQUEST OF CLIENT
Resource inlining lets the server predict the client's request and send the resource without being asked. Its major drawback is that the inlined resources are often delivered before the document itself, delaying processing and leaving the client unable to tell the resource apart from the document.
HTTP/2 instead uses server push: the server informs the client that it is about to push data by sending the headers in a PUSH_PROMISE frame; the client checks whether the resource has already been delivered and responds with RST_STREAM to cancel an unwanted push, but accepts it when the resource has not been received before.
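The client's decision on a PUSH_PROMISE can be sketched like this (the frame names come from RFC 7540; the cache check and function name are a simplified illustration, not a real client implementation):

```python
def handle_push_promise(promised_path: str, cache: set) -> str:
    """Toy client reaction to a PUSH_PROMISE frame: cancel the push
    with RST_STREAM if the resource is already cached, else accept it."""
    if promised_path in cache:
        # Already delivered earlier: reject the unwanted push.
        return "RST_STREAM"
    cache.add(promised_path)  # accept and cache the pushed resource
    return "ACCEPT"

cache = {"/style.css"}
print(handle_push_promise("/style.css", cache))  # -> RST_STREAM
print(handle_push_promise("/app.js", cache))     # -> ACCEPT
```

This is what distinguishes server push from inlining: the client is told in advance what is coming and retains the ability to refuse it.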
Thus, looking at the performance, each newer version is built to overcome the disadvantages of the older ones and to provide a better web experience; HTTP/3.0 has been developed along these lines and is expected to come into wide use soon.