18 TB effective still just about puts you in the big boy world of backups.
In the real world I would probably ask a few things first:

- What is the data? Database (never you mind, it is just a binary blob), virtual machines, general file store*, actual disc partitions/PC clones...
- What sort of downtime can you handle? Pumping 18 TB over a 100 Mbps network port would take a little while after all; even with gigabit it is not necessarily an overnight job -- maybe a weekend if everything goes smoothly. 10 Gb gear helps but is not exactly cheap (see the back-of-envelope sketch after this list).
- Depending upon the data, what kind of restore do you want? Full partition map, data on the partitions, just need my files...
- Equally, what sort of backup are we looking at? Hot, cold, offsite/onsite, what sort of restore timeframes, what sort of backup schedule will it have...
- Has someone read a whitepaper/attended a conference and spoken nasty words like high availability (HA) or fault tolerance (FT)?
- Can you deal with a certain amount of loss in a given situation? Changes since the last backup, someone wanting to rewind the clock even further, an error being duplicated into your backups...
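To put rough numbers on the downtime question, here is a quick back-of-envelope sketch. It assumes the link runs flat out with no protocol overhead or disk bottlenecks, so real transfers will be noticeably slower:

```python
# Back-of-envelope transfer times for 18 TB at various link speeds.
# Idealised: no protocol overhead, no disk bottlenecks, link at 100%.

DATA_BYTES = 18 * 10**12  # 18 TB, decimal terabytes

LINKS_MBPS = {
    "100 Mbps": 100,
    "1 Gbps": 1_000,
    "10 Gbps": 10_000,
}

for name, mbps in LINKS_MBPS.items():
    bytes_per_sec = mbps * 1_000_000 / 8  # bits/s -> bytes/s
    hours = DATA_BYTES / bytes_per_sec / 3600
    print(f"{name:>8}: {hours:7.1f} hours ({hours / 24:.1f} days)")

# 100 Mbps: ~400 hours (over two weeks)
#   1 Gbps:  ~40 hours (a weekend, if nothing hiccups)
#  10 Gbps:   ~4 hours
```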
*story from a friend, but he had an image library of considerable size in itself, with various sized thumbnails/scaled versions for each image; that made for quite a few files in the end and caused some things some trouble. Personally I would have considered regenerating the thumbnails rather than backing them up, but that might have been effort at some level.
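If you did go the regenerate route, a minimal sketch with Pillow might look like the below. The directories and target sizes are made-up examples, not anything from the friend's setup:

```python
# Regenerate thumbnails from source images after a restore, rather than
# backing up every scaled copy. Directories and sizes are hypothetical.
from pathlib import Path

from PIL import Image

SOURCE_DIR = Path("restored/originals")   # hypothetical restore location
THUMB_DIR = Path("regenerated/thumbs")
SIZES = [(128, 128), (512, 512)]          # whatever the site/app expects

THUMB_DIR.mkdir(parents=True, exist_ok=True)

for src in SOURCE_DIR.glob("*.jpg"):
    for w, h in SIZES:
        # reopen per size: thumbnail() only ever shrinks the image
        with Image.open(src) as im:
            im.thumbnail((w, h))          # in place, keeps aspect ratio
            im.save(THUMB_DIR / f"{src.stem}_{w}x{h}.jpg")
```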
If it is just files, or the problem effectively reduces to files, then never underestimate good old FTP with a proper client and server. Similarly, rsync is not to be sniffed at.
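rsync in particular only sends what changed on subsequent runs, which matters a great deal at this scale. A minimal sketch of wrapping it for a scheduled job; the paths, host and flag choices here are assumptions, not a prescription:

```python
# Drive rsync from Python for a recurring file-level backup.
# Standard rsync flags used:
#   -a        archive mode (recursive, preserves perms/times/symlinks)
#   -v        verbose
#   --partial keep interrupted transfers so they can resume
#   --delete  mirror deletions (dangerous -- test with --dry-run first!)
import subprocess

SRC = "/srv/data/"                              # trailing slash: copy contents
DEST = "backup@nas.example.lan:/backups/data/"  # hypothetical target host

subprocess.run(
    ["rsync", "-av", "--partial", "--delete", SRC, DEST],
    check=True,  # raise if rsync exits with an error
)
```

Run it from cron or a systemd timer and the first pass does the heavy lifting; later passes only move the deltas.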