In a peer-to-peer (P2P) network, the peer that is streaming the content also uploads it to another peer, and that peer in turn uploads the stream to yet another peer, and so on. This reduces the bandwidth strain on the original uploader (in this case, the stream source) and allows streams to buffer quickly and stay mostly glitch-free all over the world.
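To put rough numbers on that saving, here is a back-of-the-envelope sketch in Python; the bitrate, audience size, and source fanout are illustrative assumptions, not figures for PPStream or any other real network.

```python
# Rough sketch of why P2P relaying eases the load on a stream source.
# All numbers below are illustrative assumptions.

STREAM_RATE_KBPS = 400   # assumed bitrate of one channel
VIEWERS = 10_000         # assumed audience size
SOURCE_FANOUT = 10       # assumed number of peers the source feeds directly

# Client-server: the source uploads one full copy of the stream per viewer.
server_only_kbps = STREAM_RATE_KBPS * VIEWERS

# P2P relay: the source feeds only a few peers; viewers re-upload chunks
# to one another, so the source's load stays flat as the audience grows.
p2p_source_kbps = STREAM_RATE_KBPS * SOURCE_FANOUT

print(f"client-server source upload: {server_only_kbps:,} kbps")
print(f"P2P relay source upload:     {p2p_source_kbps:,} kbps")
```

In practice peers cannot always upload as much as they download, so real deployments still run seed servers, but this scaling argument is why a single source can serve a worldwide audience.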
TVAnts also seems to have higher-quality channels; however, they take longer to buffer and may glitch more often if your bandwidth is slow. PPStream, by contrast, can broadcast TV programs stably and smoothly to broadband users.
Compared with traditional streaming media, PPStream uses P2P streaming technology and supports large-scale viewing, with tens of thousands of users online at once.
Now navigate to the directory you unzipped it to and double-click the setup file. Go through the setup to install; using the default options is easiest. Once installed, there should be a TVUPlayer shortcut on your desktop. Double-click it, and a green orb with "TVU" inside it will show up as an icon in the bottom left-hand corner of your screen. After a little while, a window will pop up with a player and a channel list.
If no channels show up, you may need to allow TVUPlayer through your firewall, and you may also need to open ports.
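If you want to confirm that a port is actually reachable before digging through router settings, a quick loopback test like the one below can help; the port number is a placeholder, since the actual port varies by player and version, so check the application's settings for the real value.

```python
import socket

# Quick check of whether a TCP port on this machine accepts connections.
# The port below is a hypothetical placeholder, not TVUPlayer's real port.
HOST = "127.0.0.1"
PORT = 8901

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
    s.settimeout(2)
    result = s.connect_ex((HOST, PORT))  # 0 means something is listening

print("port open" if result == 0 else "port closed or blocked")
```

If the port shows closed while the player is running, the firewall (or a NAT router) is the usual suspect.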
We analyze the video streaming lag effect among end users. We then investigate the data scheduling in Olympic channels and compare it with that in an ordinary channel. We demonstrate that the finer-grained scheduling used in Olympic channels improves streaming quality at the cost of additional control traffic, unveiling an overhead-quality tradeoff. To the best of our knowledge, the insights gained in our study are unique. Our study aids the understanding of significant performance and design issues of PPStream, and is also useful for optimizing, modeling, and the future design of P2P IPTV systems. Finally, Section V offers the conclusion of the paper and our future work.

In the Olympic channels, a chunk contains about 21 seconds of video. Each buffer-map message includes a chunk-id and a bit string of zeros and ones, where each bit indicates whether the corresponding sub-chunk in this chunk is available.

Based on the knowledge of how P2P IPTV systems generally operate and on our understanding of PPStream's proprietary protocol details, we developed a dedicated measurement tool, PPS-Sniffer, which captures peer-list messages, buffer-map messages, and data-request messages. Since only the most important control traffic is captured, the computational complexity of the data analysis and the storage requirements stay low; we therefore believe the tool is well suited to long-term measurement.
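To make the buffer-map format concrete, here is a minimal sketch; the sub-chunk count per chunk and the field layout are assumptions for illustration, since PPStream's real wire format is proprietary.

```python
from dataclasses import dataclass

SUB_CHUNKS_PER_CHUNK = 16  # assumed count; the real value is unknown

@dataclass
class BufferMap:
    """One buffer-map message: a chunk-id plus sub-chunk availability bits."""
    chunk_id: int
    bits: int  # bit i set => sub-chunk i of this chunk is buffered

    def has(self, i: int) -> bool:
        return bool((self.bits >> i) & 1)

    def missing(self) -> list[int]:
        return [i for i in range(SUB_CHUNKS_PER_CHUNK) if not self.has(i)]

bm = BufferMap(chunk_id=1024, bits=0b0000_1111_1111_0011)
print(bm.has(0), bm.has(2))  # True False
print(bm.missing())          # [2, 3, 12, 13, 14, 15]
```

A receiving peer diffs a neighbor's buffer map against its own holdings to decide which sub-chunks to request, which is the role the data-request messages described below play.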
The whole procedure of how a P2P streaming peer joins the system is as follows. When an end user launches the software client, it connects to the channel list server to update the channel list. Once the user selects a channel, the peer node joins the corresponding overlay of the selected channel. It first requests the peer list servers to retrieve an initial list of peers that are in the same overlay. The peer node then communicates with the peers in the initial list to obtain additional peers in a gossip manner. Peers are identified by their IP addresses and port numbers, and they are returned in messages which we call peer-list messages. The node attempts to connect to these peers to download video data, which is divided into chunks. To this end, peers send each other buffer-map messages, which denote what chunks a peer has currently buffered and available for sharing. The node uses these buffer maps to decide what to download, then sends out data-request messages to request specific chunks from its peers. After accumulating enough chunks of video data in the buffer, playback begins.

To observe the system over different kinds of network connections, we deployed our PPS-Sniffer on five controlled nodes: three were connected to the campus network with Ethernet access, and the other two were connected to residential networks through ADSL access. All five nodes ran NTP [17] to assure time synchronization. Our measurement ran from the opening ceremony of the 29th Olympic Games on August 8th, 2008, to the closing ceremony on August 24th. We have made all of the collected data downloadable, and a more detailed description of the data format and the dataset is available. We limit the scope of our analysis to several typical sports events that are representative of our data set. Table I provides an overview of our measurement settings. Four of our controlled nodes watched Olympic channels; to make a comparison, meanwhile, we had the remaining node, CampusC, watching an ordinary channel of PPStream, namely First Finance.
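As a schematic of the join-and-fetch sequence just described, the sketch below wires the three message types together over a toy in-memory "network"; every name, peer, and data structure is invented for illustration and does not reflect PPStream's actual protocol.

```python
import random

# Toy network state: each peer's known peers (peer-list messages)
# and buffered chunks (buffer-map messages). All invented data.
PEER_LISTS = {"A": {"B", "C"}, "B": {"C", "D"}, "C": {"D"}, "D": {"A"}}
BUFFER_MAPS = {"A": {1, 2, 3}, "B": {2, 3, 4}, "C": {3, 4, 5}, "D": {5, 6}}

def gossip(initial_peers, rounds=2):
    """Grow the known-peer set by asking known peers for their peer lists."""
    known = set(initial_peers)
    for _ in range(rounds):
        for peer in list(known):
            known |= PEER_LISTS.get(peer, set())
    return known

def schedule_requests(my_chunks, peers):
    """For each chunk we lack, pick one peer advertising it (data-request)."""
    owners = {}
    for peer in sorted(peers):
        for chunk in BUFFER_MAPS.get(peer, set()) - my_chunks:
            owners.setdefault(chunk, []).append(peer)
    return {chunk: random.choice(cands) for chunk, cands in owners.items()}

peers = gossip({"A"})  # the initial list would come from the peer list server
print(schedule_requests(my_chunks={1, 2}, peers=peers))
# e.g. {3: 'A', 4: 'B', 5: 'C', 6: 'D'}
```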
Playback Delay. Due to the best-effort nature of the underlying Internet and the buffering mechanisms deployed in P2P streaming systems, the progress of video playback on some peers may be minutes behind other peers, and even further behind that of the program source.

[Figure: playback delay differences among the controlled nodes; x-axis: time (sec)]

We refer to this phenomenon as playback delay, and it affects the viewing experience of end users. We therefore define the playback delay time (PDT) to quantify it. We can see that the playback delay effect was notable, with CampusA lagging noticeably behind the other nodes.
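One practical way to estimate such offsets from traces is to compare, for each pair of peers, when the same chunk-id first appears in each peer's buffer-map log; the (timestamp, chunk-id) log format below is an assumed trace representation for illustration, not the paper's exact PDT definition.

```python
# Sketch: relative playback delay between two peers, estimated from
# when each chunk first appears in their buffer-map logs.

def first_seen(log):
    """Map chunk-id -> earliest timestamp it appeared in the log."""
    seen = {}
    for t, chunk_id in log:
        seen.setdefault(chunk_id, t)
    return seen

def playback_delay(log_a, log_b):
    """Median lag in seconds of peer A behind peer B over shared chunks."""
    a, b = first_seen(log_a), first_seen(log_b)
    lags = sorted(a[c] - b[c] for c in a.keys() & b.keys())
    return lags[len(lags) // 2] if lags else None

campus_a = [(100.0, 1), (121.0, 2), (142.0, 3)]  # invented sample logs
campus_b = [(40.0, 1), (61.0, 2), (82.0, 3)]
print(playback_delay(campus_a, campus_b))  # 60.0 -> A runs ~60 s behind B
```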
A previous study [14] also measured the average playback delay of PPStream. Moreover, the overlay radius increases with the overlay size, because it is bounded by O(log N), as modeled in [4], where N is the overlay size.

We next examine whether and to what extent the overlay topology of PPStream lets peers of a local cluster exchange data. Let P(x) denote the set of partners of node x during a given time period. We observe, interestingly, that the partner overlap of CampusA and CampusB is zero during the time period, as shown in Figure 2; examination of other traces of these two nodes, also plotted in Figure 2, gives the same picture. This is striking for CampusA and CampusB, which were in the same LAN, since a large overlap would be natural there. It seems that the two nodes select partners independently and the local peers are not aware of each other.

[Figure 2: partner overlap between CampusA and CampusB over time]
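Partner overlap is simple to compute from peer-list logs; the sketch below assumes partners are recorded as (ip, port) pairs, matching how the paper says peers are identified, though the sample values are invented.

```python
# Sketch: partner overlap between two nodes, where a partner is an
# (ip, port) pair as carried in peer-list messages. Sample data invented.

def partner_overlap(partners_a, partners_b):
    """Fraction of A's partners that also appear among B's partners."""
    if not partners_a:
        return 0.0
    return len(partners_a & partners_b) / len(partners_a)

campus_a = {("59.64.1.2", 3456), ("202.112.0.9", 7788)}
campus_b = {("59.64.1.2", 9999), ("211.68.7.3", 1234)}  # same IP, new port
print(partner_overlap(campus_a, campus_b))  # 0.0: no (ip, port) pair shared
```

Note that keying on the full (ip, port) pair means two sessions from the same host count as different partners, which is one mundane way an overlap can read as exactly zero.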
This suggests that the peer selection mechanism of PPStream does not exploit peers of local clusters. It has been suggested, however, that PPStream exploits locality information while selecting partners. To validate and evaluate this claim, we further explore the structure of P(x) for our controlled nodes, in particular its ISP distribution. For comparison, Figure 3 also reports the ISP distribution of the P(x) of ResidenceA, which is in ChinaNetcom, during the same period. It is shown that, indeed, partners from ChinaNetcom and ChinaTelecom are in the majority for all three controlled nodes, which is expected given that they are the largest ISPs in China, whereas CSTNET is much smaller. However, we can still observe differences between the composition of partners of ResidenceA and that of CampusA or CampusB.

Data Scheduling. In this subsection, we analyze the data scheduling in Olympic channels, where each chunk is divided into smaller units. We refer to this smaller unit as a sub-chunk (or small-chunk); it is the basic unit for data scheduling in Olympic channels. Because it becomes easier for a node to find an appropriate parent for downloading data when a smaller scheduling unit is used, we investigate the following two metrics: 1) potential suppliers of a scheduling unit: the number of peers that can supply the data in a sub-chunk. This metric indicates that, to obtain the same amount of data, utilizing smaller scheduling units in Olympic channels explores more potential suppliers than in ordinary channels. 2) data miss ratio: the number of sub-chunks that are not represented in the buffer-map messages over the total number of sub-chunks. Data chunks that are not represented in any buffer-map message are probably missing. In addition, we examined the trace of August 13th, which held over 27,000 sub-chunks in total.
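The data miss ratio is easy to compute once buffer maps are parsed; the sketch below assumes each message has been reduced to a set of advertised sub-chunk ids, which is an assumed trace representation rather than the paper's actual tooling.

```python
# Sketch: data miss ratio = sub-chunks never advertised in any buffer-map
# message / total sub-chunks in the window. Set-based input is assumed.

def data_miss_ratio(total_subchunks, buffer_maps):
    """buffer_maps: iterable of sets of advertised sub-chunk ids."""
    advertised = set().union(*buffer_maps) if buffer_maps else set()
    missing = sum(1 for i in range(total_subchunks) if i not in advertised)
    return missing / total_subchunks

maps = [{0, 1, 2}, {2, 3}, {5}]           # invented sample messages
print(f"{data_miss_ratio(8, maps):.2%}")  # 37.50%: ids 4, 6, 7 never seen
```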