This topic contains 108 replies, has 0 voices, and was last updated by  Deathbal 4 years, 11 months ago.

  • Author
    Posts
  • #64053

    Laptops Daddy
    Participant

    i think psu specs are usually made up nonsense. all the numbers are meaningless if they dont provide crossloading info

    #64054

    QuadShotz
    Participant

    To reply to both….

    I’m on a fixed income, so I get what is new now in about 5 years. Hell, I played this game in 2004 on a K6-2 450…lol!

    I wouldn’t spend even $35 on a PSU if it wasn’t one of the best made…at that time.
    Try finding one of these on Ebay. Not many there at all because people keep them. 😉

    The only reason I can get this is a tech forum buddy who got it from a friend; he’d gotten it in some RMA deal and never used it. So it’s basically new old stock.

    PC Power & Cooling Turbo-Cool 510-SLI 510W
    http://www.newegg.com/Product/Product.aspx?Item=N82E16817703001

    Here are the real numbers.
    http://www.pcstats.com/articleview.cfm?articleID=1816

    I’m not a big gamer, just mainly video/photoshop. Nor will I probably ever be able to afford anything this PSU couldn’t power. Hehe….plus, my POS 114yo apt would catch on fire if I tried to power tooo much more. 8)

    #64055

    Laptops Daddy
    Participant

    i dont think there’s a PC out there that really needs an 800W PSU.

    at the wall, in the real world, anything close to half that would be absurd. crazy. rendering some stuff or gaming – that’d be like forgetting youd left the electric heater on in the spare room, just… permanently. for one PC. what if you had 3?

    no, 250 at the plug is about right peak for the high-end, and half that for most people. i dont think that’s changed for 2012. if anything, the requirements should be going down, because people are more conscious of the e (energy, environment, efficiency, etc) and the heat/noise. any more than that 200ish max total, and the components wouldnt be viable/marketable. maybe 350 with triple SLI and an overspec CPU. the g card companies must make a fortune from endorsing required better special power supplies, though. i say it’s a gimmick.

    peanut, id love to see a reading at the wall for yours, running a benchmark/stress test or something.

    my main is crashing a lot lately and im starting to think it could be the PSU. i forget what it’s using. some cooler master effort, i think. i doubt it’s lack of rated output. im pretty sure any true 400 would power anything there is – it’s all about consistency and stable volts numbers.

    #64056

    PeanutsRevenge
    Participant

    @laptops Daddy wrote:

    i dont think there’s a PC out there that really needs an 800W PSU.

    at the wall, in the real world, anything close to half that would be absurd. crazy. rendering some stuff or gaming – that’d be like forgetting youd left the electric heater on in the spare room, just… permanently. for one PC. what if you had 3?

    no, 250 at the plug is about right peak for the high-end, and half that for most people. i dont think that’s changed for 2012. if anything, the requirements should be going down, because people are more conscious of the e (energy, environment, efficiency, etc) and the heat/noise. any more than that 200ish max total, and the components wouldnt be viable/marketable. maybe 350 with triple SLI and an overspec CPU. the g card companies must make a fortune from endorsing required better special power supplies, though. i say it’s a gimmick.

    peanut, id love to see a reading at the wall for yours, running a benchmark/stress test or something.

    my main is crashing a lot lately and im starting to think it could be the PSU. i forget what it’s using. some cooler master effort, i think. i doubt it’s lack of rated output. im pretty sure any true 400 would power anything there is – it’s all about consistency and stable volts numbers.

    Well, you spurred me into doing something I’ve been meaning to do these past few weeks: see how much power my rig actually pulls.

    I’ve got a 400W (650VA) APC UPS (not PSU as put previously) here for the server I’ve been working on a while which gives load as a percentage.

    I’ve plugged the power into my system and left the USB in the server (as the software’s already set up) and run a few quick tests. Thankfully, Laptop’s wrong.

    Firstly, my system specs (nothing fancy)

    PSU: 850W GOLD (90%+ throughout except below 20% load).
    CPU: Intel Pentium E5400 currently clocked to 3.4GHz @ 1.232V (according to CPU-Z) (SpeedStep enabled), 65W TDP
    GPUs: 2 x NVIDIA GTS 250 1GB
    HDD: 1 x NEW Westy Blue 500GB
    Mobo: XFX 750i

    System load:
    CPU: Prime95 v25.6 (2 threads, In-Place Large FFTs for max heat and power)
    GPUs: Furmark v 1.9.0 @ 1680×1050, 8x MSAA with and without SLI

    Results:

    Test – – – – – – – – – – – – – – – – – – -: Load (%) – Power into PSU – (Power out of PSU)
    Idle (Windows desktop, CPU @ a steady 0-3%): 50% – 200W – (180W)
    1 x GPU loaded with Furmark (SLI disabled) : 71% – 284W – (255W)
    2 x GPUs loaded with Furmark (SLI enabled) : 90% – 360W – (324W)
    CPU loaded with Prime95 – – – – – – – – – -: 59% – 236W – (212W)

    For power out of the PSU I’ve assumed 90% efficiency throughout. That’s unlikely to be exactly the case, but I have no way of measuring it unless I swap PSUs (I have another Gold PSU and a Bronze here).

    I did not run the system at its max load, as the power would be greater than the PSU is rated for. Additional loads would have been a HDD stress test and a RAM stress test, but they wouldn’t add much power.

    It’s worth noting that @ 400W this PSU is pretty much in its sweet spot; almost all PSUs run at their most efficient around 50% load, although they’re awful below 10% load.
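    The assumed-efficiency arithmetic above can be sketched in a few lines of Python. The 90% figure and the wall readings come from the table in the post; nothing here is independently measured:

    ```python
    # Reproduce the "power out of PSU" column from the wall readings above,
    # using the post's assumed flat 90% efficiency (an assumption, not a
    # measured value).
    EFFICIENCY = 0.90

    wall_readings = {  # watts measured at the wall
        "Idle": 200,
        "1 x GPU (Furmark)": 284,
        "2 x GPU (SLI Furmark)": 360,
        "CPU (Prime95)": 236,
    }

    for test, watts_in in wall_readings.items():
        watts_out = watts_in * EFFICIENCY
        print(f"{test}: {watts_in}W in -> {watts_out:.0f}W out "
              f"({watts_in - watts_out:.0f}W lost as heat)")
    ```

    The 284W row works out to 255.6W, which the post rounds down to 255W; the other rows match the table directly.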

    ________________________________________________________________________________

    Lappy is quite right that the standard, mainstream PC doesn’t need over 200W from the PSU. However, they’re often shipped with cheap, crap PSUs which can’t put out the power they claim and/or only put a small amount of power on the VITAL 12V rail. They’re also hideously inefficient.

    If you look at my results, using a high-end, Gold-rated-efficiency PSU, up to 40W is still being wasted by the PSU as heat.
    If an 80% PSU (simple 80 Plus certified) were used, that would be EIGHTY watts of power disappearing when the system’s running.
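    A rough check of that claim at the 360W peak reading, assuming flat efficiency (real efficiency curves vary with load, so this is back-of-envelope only):

    ```python
    # Heat lost inside the PSU at a given wall draw, for a flat efficiency.
    # Flat efficiency is a simplification; real curves vary with load.
    def waste_watts(watts_in, efficiency):
        return round(watts_in * (1 - efficiency), 1)

    print(waste_watts(360, 0.90))  # Gold-ish at the 360W peak -> 36.0
    print(waste_watts(360, 0.80))  # basic 80 Plus -> 72.0
    ```

    So at the measured peak the gap between Gold and plain 80 Plus is roughly a doubling of waste heat, in line with the post’s "up to 40W" vs "EIGHTY watts" comparison.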

    I can already hear Laptop crying out that that’s still not over 400W. Well, Mr Broody-face, current TOP-end graphics cards are now shipping with little switches to circumvent the PCI-Express power limit of 300W. The cards are downclocked when set to 300W and clocked to their factory max when switched to 350W (I think the limit is 350W atm, prob increase soon).
    The point of the switch is that the customer / OEM is taking responsibility for supplying sufficient power to the card.

    My two cards are pretty low end (pretty much identical to my old 8800 and 9800 GT cards) as can be seen from their low power draw.

    So:

    A run of the mill system that most people have for checking email, watching some YouPorn and online banking probably draws 200-250W MAXIMUM.

    A run of the mill gaming rig for someone that doesn’t spend much on their rig will prob draw 250-300W.

    A semi serious gaming rig costing £1-1.2k would prob draw 400-500W

    A serious gaming rig costing £2-2.5k would prob draw 500-700W

    An obscene rig owned by someone that has more money than skill costing £2500 – £5000 would easily require 800-1500W of power.

    The reason for me getting this PSU when I prob won’t pull over 600W?
    The PSU is most efficient around 50% load; for this unit, that’s 425W!

    I’ll shut down this system again shortly and get a reading for the server (it’s essentially a mainstream desktop PC); from memory it pulls about 100W @ idle.

    P.S. Sorry it’s so long and messy, I’ll see about tidying it up shortly as well.

    EDIT:

    The server’s pulling 25% of 400W @ idle, 41% with the CPU stressed (CPU = AMD Phenom II X4 955 @ 3.2GHz), running from a 600W Coolermaster Gold PSU.

    #64057

    PeanutsRevenge
    Participant

    I see you there Lappy.

    Should you wish to discuss the matter in a way that would not be appropriate for this thread (the stupid people will get confuzzled, bless em), I’m in the scorched IRC channel @ irc.blitzed.org #scorched if you’re an IRC user.

    #64058

    Laptops Daddy
    Participant

    @PeanutsRevenge wrote:

    I see you there Lappy.

    Should you wish to discuss the matter in a way that would not be appropriate for this thread (the stupid people will get confuzzled, bless em), I’m in the scorched IRC channel @ irc.blitzed.org #scorched if you’re an IRC user.

    ha ha. ok. yeah – i’ll go on irc.

    id already written this when i saw, so, i was saying:

    that’s an interesting post.

    I’ve got a 400W (650VA) APC PSU here for the server I’ve been working on a while which gives load as a percentage.

    i dont want to be an ass, peanut (much : ). you know i love you really, but i think that’s what they call false attribution. it’s a fallacy. you’re assuming that ‘400W’ psu actually draws 400W at 100%.

    there’s no substitute for a meter reading at the wall.

    sounds like a cool psu that gives percentages – ive never come across one.

    anyway, im not saying the percentages it’s giving you are inaccurate, necessarily, but, that either way, the numbers are meaningless without calibrating the power consumption at the wall to see if it’s really using 400W at 100%. id bet money that it isnt.

    A run of the mill system that most people have for checking email, watching some YouPorn, online banking probably draws 200-250W MAXIMUM

    it’s more like half that from my experience, unless youre including the monitor.

    #64059

    PeanutsRevenge
    Participant

    Sorry, 400W APC UPS

    stupid UPS being the same letters as PSU just in a different order when I was typing PSU a lot :@

    I’ll update the post.

    #64060

    Laptops Daddy
    Participant

    ohhh, ok. yeah – that makes more sense. thanks for peeing on my fire.

    i just did a little test of my own. forgive the sketchy pic – it’s under a desk and the flash made the numbers invisible:
    [attachment=0:2yvv4seu]reading.jpg[/attachment:2yvv4seu]
    that’s a 6 core 1055t with a single 8800GTS, all cores maxed doing some rendering, running furmark on top for good measure. that spec’s what id call ‘high-end gaming rig’ – which is partly delusion, defiance and wishful thinking. it wouldnt touch the latest games, but the g card was one of those ‘damn the environment, i want 10 more fps’ models when it was new, and prob uses as much as a similarly excessive modern equiv energy wise.

    i dunno about the labels – i suppose we all have a different idea of what’s normal. id call anything sli ‘enthusiast/gimmick level’, and triple sli is just – stupid excess for people with hardware fetishes and moar computers than a PC repair shop : )

    i stand by my numbers. 250 at the wall is about right for high-end, half that for the midrange. i hope that doesnt change too much.

    ps:
    very good point about efficiency under different load percentages. temperature probably makes a difference too.

    #64061

    PeanutsRevenge
    Participant

    LOL, I hope it’s not the 320MB or 640MB version as they were the ugly ducklings of the breed then.
    Nvidia messed up the G80 chip in a big way; the G92 GTS with 512MB was far superior. My 8800GT with 512MB would beat an 8800GTS 640MB 😀

    As for triple SLI being stupid, it’s really not for those of us with multiple monitors.
    I’m hoping to sometime soon get a couple more 22″ 1680×1050 screens, which will allow me to run a resolution of 5040×1050. That’s A LOT of pixels to be pushing around the place and that’s on small screens.
    If I had the cash, I’d get 3 x 30″ 2560×1600 screens for 7680 x 1600 resolution. plus another 3 22″ers.
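    For scale, the pixel counts behind those spanned resolutions (a quick sketch; dimensions as quoted above):

    ```python
    # Pixel counts for the spanned resolutions mentioned, relative to a
    # single 1680x1050 screen.
    setups = {
        "single 22in": (1680, 1050),
        "3 x 22in spanned": (5040, 1050),
        "3 x 30in spanned": (7680, 1600),
    }

    base = 1680 * 1050
    for name, (w, h) in setups.items():
        px = w * h
        print(f"{name}: {px:,} pixels ({px / base:.1f}x a single screen)")
    ```

    Three spanned 30″ screens come out at roughly seven times the pixels of one 22″ screen, which is why multi-GPU setups get pitched at multi-monitor gaming.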

    Clearly you’ve not tried running the latest games at pretty detail levels (not max, just normal +)

    #64062

    Laptops Daddy
    Participant

    ive had a few sli systems over the years. 6800GT, 7900GT, 8800GTS. it’s possible it’s finally come of age or that my set-ups were flawed, but ive seen enough. i won’t be having another.

    sli with 2 is stupid, like a car with two engines. it might go up a nice steep hill if it has big enough wheels and a nice strong chassis, but youll need an engineer to come along with, for when it breaks down at each bump. and it’s def not for the motorway. i dont know what 3 is. crazy inventor guy, driving down the road on nuclear steam power with sails made from the NYYYlon.

    i get you, for multiple displays, it could be worthwhile. spanning screens does nothing for me. id rather have each on a dedicated pc for the same money.

    #64063

    PeanutsRevenge
    Participant

    I don’t span for normal use; it would be for gaming, mostly driving/racing games so that I can see cars to the side of me on the side screens, and for RTS games so the radar/map has a whole screen to itself (Spring does it).

    As for performance, it’s a good boost, as I’ve said recently, somewhere between 70-95% increase with second card, only about 30-70% with third.
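    Those uplift figures translate to frame rates roughly like this (a sketch; the percentages are the post’s ballpark numbers and the 40 fps base is invented):

    ```python
    # Multi-GPU scaling using the uplifts quoted above: each extra card
    # adds a fraction of the single-card frame rate, not a full doubling.
    def scaled_fps(base_fps, uplifts):
        return base_fps + sum(base_fps * u for u in uplifts)

    base = 40  # hypothetical single-card frame rate

    print(f"2-way, best case:  {scaled_fps(base, [0.95]):.1f} fps")
    print(f"3-way, worst case: {scaled_fps(base, [0.70, 0.30]):.1f} fps")
    ```

    Note the worst-case three-card result barely beats the best-case two-card result, which is the usual argument against a third card.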

    However, your example of needing an engineer around isn’t far wrong; it’s still annoying to use with multiple monitors, partly because Windows is so dumb.
    With mine, when I ‘activate all displays’ (3), it puts my middle monitor on the right, left screen in the middle and right screen on the right.

    When enabling SLI, it disables the correct monitor (right monitor, but that’s because it disables the monitors attached to the second card), but makes the right screen the primary…

    It’s all very annoying, but I am using older cards, whereas the new ones have had effort put in to resolve the multi-monitor problems in Vista/7.

    BTW, what you were saying to db the other day was completely wrong, 1 x very good card < 2 x mid range cards.

    However, it does add complexity, doubles the likelihood of failure (although if a card fails, there’s a backup already in place) and requires power / space.

    But I need multi monitors, so I’m sticking with multiple cards, even if they’re not very often SLI’d.

    #64064

    Laptops Daddy
    Participant

    …what you were saying to db the other day was completely wrong, 1 x very good card < 2 x mid range cards.

    that wasnt the conversation we had, the way i remember it. i said you can offload antialiasing to a second card, which you said wasnt right. youd have to clarify ‘<’: one high-end vs 2 mid-range. i say the single card will be less likely to go wrong/crash/break and more likely to give better frame rates across the board. im assuming similar total cost, so, say one £120 card vs two £60.

    sli performance depends a lot on the drivers, the game youre playing, the settings and what you’re asking the second card to do. i mean, i dont need to tell you, clearly – youre probably sat in front of one. i know some people imagine that you can just plug in a second card, and youll suddenly have a 90% increase in performance at the same settings. it’s never that simple. identical frame rates and a system crash are probably just as likely in the scheme of things : )

    im surprised by your numbers. id have thought 30-70% for a second.

    i hate windows 7

    #64065

    PeanutsRevenge
    Participant

    Oh, I didn’t realise you were talking low-mid range card versus 2 budget cards.

    Not sure why anyone would want to do such a thing, given the additional cost of SLI/Crossfire motherboards. It would be better (for multi-monitor use in a desktop environment) to get a mobo with onboard gfx and a £60 gfx card.
    Also, you might struggle to find cards that can be SLI’d / crossfired @ that price point.

    I was talking about 2 x £200 vs 1 £400 card (2xNvidia 560Ti vs Nvidia 580)

    Another reason I like multiple cards (which I’d previously forgotten) is that it’s quieter to have 2 cards than one because the coolers are doubled.

    There is another little annoyance I’ve read about lately (on Tom’s): microstuttering.

    For more info, google it, as I can’t remember it well enough to explain and I’m not sure I understood it well enough to put into English anyway.

    Basically, with multiple cards, the frame rates jump around more, for brief periods (talking less than a second) they can drop below that of a single card.
    It’s not something that most people apparently notice, but it’s something that’s being talked about atm.
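    The effect can be illustrated with invented frame times: two runs with identical average fps, one evenly paced and one alternating AFR-style (the numbers below are made up for illustration, not measurements):

    ```python
    # Same average frame rate, very different pacing. Microstutter shows
    # up in the worst frames, not in the average.
    steady     = [20.0] * 8        # constant 20 ms per frame
    stuttering = [12.0, 28.0] * 4  # alternating, same 20 ms average

    for name, times in [("steady", steady), ("stuttering", stuttering)]:
        avg_fps = 1000 / (sum(times) / len(times))
        worst_fps = 1000 / max(times)
        print(f"{name}: {avg_fps:.0f} fps average, "
              f"{worst_fps:.1f} fps on the worst frame")
    ```

    Both runs report 50 fps average; only the worst-frame figure (35.7 fps for the alternating run) exposes the stutter, which is why fps counters tend to hide it.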

    However, yea, SLI/Crossfire performance has increased a lot lately, partly because ATI and Nvidia are at war over top dog title.

    SLI/Crossfire should also make even more difference with 3D as well, especially quad SLI I reckon: 1 card does the left eye odd frame, the next does the right eye odd frame, the third does the left eye even frame and the 4th the right eye even frame!

    Not a fan of 3D tho, prefer more monitors to 2 monitors in 1!!!

    #64066

    Anonymous

    That’s a familiar discussion, there was a thread just like that today on lifehacker 😉
    Myth vs. Fact: How Much Can Free PC Tweaks Improve Gaming Performance?

    #64067

    PeanutsRevenge
    Participant

    @| Hayt | wrote:

    That’s a familiar discussion, there was a thread just like that today on lifehacker 😉
    Myth vs. Fact: How Much Can Free PC Tweaks Improve Gaming Performance?

    Well, here’s a FACT for you, underbathed french ass (that’s another fact BTW ;))!

    Quad SLI’d Mars IIs are required for this:

    http://www.google.com/pacman/

    BTW a Mars II is a dual GPU GTX 580 with 3GB memory, I BELIEVE 4 of these can be SLi’d for true QUAD SLI, which would actually be 8 GPUs 😀

    And fair play to google for that one, the code behind that must have taken an age of tweaking, let alone the actual writing of something that performs pretty much like pacman!!!

Viewing 15 posts - 16 through 30 (of 110 total)
