It's impossible; it will just slow down. That's why uTorrent goes into "Disk Overloaded" mode: that's basically it downloading faster than the HDD's write speed, and when it hits that state it slows right down to almost nothing, so that whatever is waiting in the cache can be written out.
Then the data is written to RAM and put in a queue. Thanks =D
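That RAM-queue idea can be sketched in a few lines. This is a minimal toy model, not uTorrent's actual code: a fast "network" producer and a slow "disk" writer share a bounded in-RAM queue, and when the queue fills, the producer blocks. That backpressure is the same effect as the "Disk Overloaded" stall described above.

```python
# Toy producer/consumer model of a download cache (assumed names, demo only).
import queue
import threading
import time

PIECES = 20
write_queue = queue.Queue(maxsize=4)  # small RAM cache so the stall is visible

def network_thread():
    for i in range(PIECES):
        write_queue.put(i)   # blocks when the cache is full: the "download" stalls
    write_queue.put(None)    # sentinel: download finished

written = []

def disk_thread():
    while True:
        piece = write_queue.get()
        if piece is None:
            break
        time.sleep(0.01)     # simulate a slow mechanical write
        written.append(piece)

t1 = threading.Thread(target=network_thread)
t2 = threading.Thread(target=disk_thread)
t1.start(); t2.start()
t1.join(); t2.join()
print(written)  # all pieces land on "disk" in order, throttled to the write speed
```

The download never loses data; it just runs no faster than the slowest stage, which is exactly the slowdown described in the first post.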
With Google pushing towards 1 gigabit/sec speeds, I realized that's faster than some standard mechanical drives. But thinking about it, if data is being sent to you at that rate, there must be some probability of accidentally pushing too much.
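A quick back-of-the-envelope check shows the worry is plausible. The HDD figure below is an assumed typical sequential-write speed for a mechanical drive, not a measurement:

```python
# 1 Gbit/s line rate vs. an assumed mechanical-drive write speed.
link_bits_per_s = 1_000_000_000           # 1 Gbit/s
link_bytes_per_s = link_bits_per_s / 8    # bits -> bytes
link_mb_per_s = link_bytes_per_s / 1_000_000

hdd_write_mb_per_s = 100                  # assumed slow-ish 7200 rpm drive

print(link_mb_per_s)                          # 125.0 MB/s incoming
print(link_mb_per_s > hdd_write_mb_per_s)     # True: the link can outrun the disk
```

So a saturated gigabit link delivers about 125 MB/s, which is indeed more than some mechanical drives can sustain.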
I thought that he meant a theoretical scenario where there aren't any bottlenecks along the entire path. Also, I'm sure that a modern SB (now called the PCH) with a PCIe GbE controller wouldn't have any problems doing all that over a DMI 2.0 bus.

That's impossible: the Ethernet and PCI transfers are directed at the southbridge after the memory cycle, meaning the HDD gets priority at the hardware level for file transfers. A packet can't travel from the PCIe Ethernet controller to the southbridge and on to the HDD faster than the HDD can cycle data back to itself. It's like assuming you can drive from city A (far west) to city B (far east) faster than city B can drive to a middle city C and return to itself...
edit: just read your post...
I'll put it simply: your motherboard maker would have to be a bloody fool to let something this idiotic happen. There's an HDD controller and there's a network card controller, and one would be a fool to let the network technology exceed the HDD technology on the same motherboard by that much. What does it come down to? Even if your ISP provided you a petabyte per second of bandwidth, the most you can get is limited by your network card, and the network card's transfer speed won't be a faster technology than the HDD's if the motherboard maker is smarter than a rock. Unless, of course, you pair it with an IDE drive or something even older, which would bottleneck the southbridge and northbridge together and keep you from using even the bandwidth the network card allows, because the buffer keeps filling up. In that case your PC will slow to a crawl while writing that data and stall all network communication: it will only write the data it has queued plus whatever it couldn't write in time, and it won't accept more downloaded data until it's done.
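The "buffer fills up and the network stalls" behaviour in that last sentence is real, and you can see it at the OS level. A minimal sketch, assuming a POSIX-ish platform where `socket.socketpair()` is available: if the application stops draining a socket, the kernel buffers fill and further non-blocking sends fail (with real TCP, the receive window would shrink and the remote sender would slow down the same way).

```python
# Demo: writes pile up in kernel buffers when nobody reads the other end.
import socket

a, b = socket.socketpair()
a.setblocking(False)

sent = 0
try:
    while True:
        sent += a.send(b"x" * 4096)   # nobody is reading from b...
except BlockingIOError:
    pass                               # ...so the buffers eventually fill up

print(sent)            # some finite amount got buffered, then the "network" stalled

data = b.recv(65536)   # draining the reader frees space so sending could resume
print(len(data))
a.close(); b.close()
```

The point is that the stall is graceful: nothing is lost, the sender just can't push more until the slow side catches up.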
If we assume away the bottlenecks, it's the same question as "what happens if the fuel in my car doesn't reach the carburetor [the part that mixes fuel and air for the engine] in time for the mixture?" The answer is simple: it won't reach the engine in time, the engine won't manage to ignite it, no power will be made, and the engine lacks power for a moment. The ignition system gets a second try on the next cycle, and if that one is late too, the engine stalls and the car is left immobile. But the car is DESIGNED so the carburetor gets both fuel and air in time; any malfunction there is an ISSUE in the car itself.
LOL, you edited your post while I was writing. Yes, indeed, the buffer-full scenario is what I had described a few posts earlier.
Indeed, but don't think in terms of 20 GB/s; 250 MB/s would be enough to saturate the drive's write speed. 20 GB/s is overkill.

DMI is a highway between the NB and SB, that's it. Until the network bus reaches either one, it has to wait for the next cycle, or time itself to take its cycle ahead of the HDD unit. And before it reaches the NB for a transfer, it has to wait for memory to complete its cycle, which is timed mostly for the processor so it can use some of the L3 cache; it would be poor driver design to over-buffer memory from the DMI together with L3, that just makes a mess out of the problem. Of course you can route the network bus directly to the SB, but then you gain nothing over DMI 2.0, even if it offered a double lane, similar to PCIe, for 40 Gbit/s of bandwidth. Also remember that DMI 2.0 is a highway scheduled by timing: it can't dedicate 20 GB just to network traffic, it has to carry EVERYTHING on its trucks, including the buffer.
The PCH and its supporting mobos and CPUs simply cut out the NB and hand its roles to the CPU (memory controller etc.) and the SB (which is now called the PCH, but is still a southbridge buffed with extra abilities). It's still timed and cycles on a schedule; it's just done on the processor and buffered through memory (and, if flagged, the processor's L1 cache). That's why DMI was made for the PCH-era mobos (Intel): to communicate with the virtual northbridge inside the processor (the memory controller, doing the buffering), while the packets that need to be saved on the HDD are sent from the PCH to the SATA or IDE controller, depending on which is your master HDD controller (even if the slave is the target, data has to pass through the master). That's enough cycles and timing overhead to make network packets slower than a direct data-cable copy from one controller/port to another, especially to itself. Though if you copy data from an SSD to a SATA II drive, the result is obvious: only one lane is reserved for buffering, to prevent freezing; it's a command issued by the SSD's firmware to prevent buffer overload, because SSDs are a zillion times faster than SATA II/III HDDs (talking about the mechanical ones here, obviously). It's also why Intel built an emulator between two ports to combine their transfer speeds with simple priority remapping.
Also, the NB and SB functions are now integrated alongside a single PCH die, so I'd guess access times between them are now practically non-existent.
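For what it's worth, the DMI 2.0 numbers being argued over can be sanity-checked with nominal interface rates. These are assumed spec figures, not measurements: DMI 2.0 is essentially a PCIe 2.0 x4 link, about 2 GB/s per direction after 8b/10b encoding overhead.

```python
# Rough capacity check: does GbE + SATA III traffic fit on a DMI 2.0 link?
dmi2_mb_per_s = 4 * 500      # 4 lanes x ~500 MB/s effective per lane = 2000 MB/s
gbe_mb_per_s = 1000 / 8      # Gigabit Ethernet: 125 MB/s
sata3_mb_per_s = 600         # SATA III, ~600 MB/s after encoding overhead

utilization = (gbe_mb_per_s + sata3_mb_per_s) / dmi2_mb_per_s
print(utilization)           # well under 1: both fit with plenty of headroom
```

So even a fully saturated gigabit NIC plus a fully saturated SATA III port together use only a fraction of the DMI 2.0 link, which supports the point that a modern PCH has no trouble with this workload.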