http://www.techwatch.co.uk/2010/07/27/apple-updates-imacs-and-releases-12-core-mac-pro/
Don't you hate headline-grabbing journalism? Two CPUs with 6 cores each is not a 12-core laptop, it's a dual-CPU laptop. Gits.
Still, if any of you kind gentlemen would care to purchase this for me, I may buy you a pint ;D
Technically they are right Niall.
Sorry :out:
Not in the way they're advertising the article. It implies a 12-core processor. It should read dual hex-core laptop.
I'm taking your reply (and all others) as a sign you will be getting me one ;D
It's not a laptop either.
Sorry Niall, but they're right: it's not a laptop, and 12 cores is entirely accurate, since it has got 12 cores. You jumped to a conclusion, and when the article didn't support it you blamed the article instead of admitting your mistake to yourself. :P Nar nar nee nar nar etc. ;)
I see the starting price in the US for the 12 core will be $4999 :eek4:
I have to agree with Mitch and giz, the headline says 12 cores and the machine has 12 cores. To me, it doesn't matter if that's on 1 CPU or 12, it's still as accurate as can be.
Still, the machine must be pretty quick!
We all want one. :)
I don't. I want several! ;D
:lol:
Hands and feet job, eh?
Excuse me while I connect my HPC to my ADSL connection.
With 12 cores you could rent out capacity.
'now available for nuclear test modelling'
;D
Quote from: pctech on Jul 28, 2010, 16:45:46
Excuse me while I connect my HPC to my ADSL connection.
With 12 cores you could rent out capacity.
'now available for nuclear test modelling'
Mitch, I think you may need to add a few more cores for your modelling ;D
http://www.computerweekly.com/Articles/2009/03/09/234614/World39s-biggest-computer-built-for-US-nuclear-department.htm
Yes perhaps
I've been trying ever so hard, but even though I can afford it and don't have to worry about anyone else's opinion, I really can't justify a Mac Pro :bawl:
A quad-core 27" iMac on the other hand... :rub:
:hehe:
Would you like some help, Bill? We can justify anything given a little time. :evil:
I think it would be very useful in winter; it would keep the room nice and warm.
;D
Quote from: Steve on Jul 28, 2010, 18:19:54
I think it would be very useful in winter; it would keep the room nice and warm.
I know what would happen there- the cat would go to sleep on it and I'd be forever digging fur out of the fan!
I've got about a fortnight before the next statement date on my credit card, time for some serious thinking :P
We will always be here to help, Bill. ;D
I've never known Visa to need any help :eek4:
I managed to get to over £10,000 loading up the options; at least delivery is free.
Quote from: Bill on Jul 28, 2010, 19:00:10
I've never known Visa to need any help :eek4:
;D
Ah, but you might. :evil:
Quote from: Steve on Jul 28, 2010, 19:01:43
I managed to get to over £10,000 loading up the options; at least delivery is free.
That's the trouble with internet shopping- a few mouse clicks can get damned expensive!
I get very confused with this laptop (16")... it has one physical processor with four cores (i.e. quad-core), but eight logical processors (due to hyperthreading). Task Manager looks a little cramped, and most of the time I have nothing that runs 8 threads... sad but true.
It will be interesting to see how long it takes the software to catch up with the hardware, won't it?
We have some software here used for modelling binary star systems, and it will literally eat a single CPU for hours on a single run (and sometimes you have to do it dozens of times, tweaking by hand after each solution). Since I have a 2x4 Intel Xeon rig with a stack of RAM in it, we thought that since it was our own code, we could make it MP (multiprocessor). In the most naive way possible, we ran Valgrind (a code profiler) on it, found the most intensive algorithms where the CPU sat, and wrapped them in OpenMP directives, so that each cycle of the inner loop would go to another processor. You would think you'd suddenly be at 800%... but on average the code now ran at 133%, i.e. it was using 1+1/3 processors for its effective runtime. Not particularly impressive. Of course, this was a very simple implementation and a 33% benefit is better than nothing, but it does go to show that it will probably need a hefty rewrite to achieve much better utilisation. It's just not easy... there are a lot of subtle interactions, and when anything has to fetch data from the disc, it all goes to hell. But that's always been the case.
Thanks for the insight. :thumb:
Oo that's interesting. What do you do? The nerd in me needs to know :D
Probably works for Jodrell Bank (http://www.jb.man.ac.uk/)
Quote from: esh on Jul 29, 2010, 10:47:43
We have some software here used for modelling binary star systems, and it will literally eat a single CPU for hours on a single run (and sometimes you have to do it dozens of times, tweaking by hand after each solution). Since I have a 2x4 Intel Xeon rig with a stack of RAM in it, we thought that since it was our own code, we could make it MP (multiprocessor). In the most naive way possible, we ran Valgrind (a code profiler) on it, found the most intensive algorithms where the CPU sat, and wrapped them in OpenMP directives, so that each cycle of the inner loop would go to another processor. You would think you'd suddenly be at 800%... but on average the code now ran at 133%, i.e. it was using 1+1/3 processors for its effective runtime. Not particularly impressive. Of course, this was a very simple implementation and a 33% benefit is better than nothing, but it does go to show that it will probably need a hefty rewrite to achieve much better utilisation. It's just not easy... there are a lot of subtle interactions, and when anything has to fetch data from the disc, it all goes to hell. But that's always been the case.
You said you tweak it by hand; would it not be more efficient to run 4 separate processes, each with a different set of variables? Then you utilise 100% of each of the cores. I suppose it's no good if you're researching just one system. Even then, you could reach an average/correct result quicker by selecting a high variable, a mid and a low, and then going off the closest of the three from there. That should give you fewer runs to get to the correct variables.
BTW, IANAS (I am not a scientist) so take that with a pinch of salt. ;)
A lot of video (and music) encoding software now supports multiple processors/cores. DBpoweramp is a good example: say you are re-encoding a load of music, it will do each music file on a separate core, up to the number of cores in your machine.
That's the one program I really miss on the Mac, and it works exactly as you say; it speeds up the transcode dramatically on a multicore machine.
Yes, you can run multiple processes simultaneously (being careful not to let them overwrite each other's results, and of course I never made that mistake!). In fact, we often do when running an MCMC simulation using the program (to achieve a distribution of the variables, i.e. variance and error). The recent problem was in fact that the data was too good; we got a preliminary downlink from Kepler (a space satellite) and, because it is just so unbelievably precise, we needed millions of grid points to model what we were seeing, which shoots the computational time up from 'hours' to 'weeks'. The stuff you get off of there is just amazing.
Sounds like you have a very interesting job. Want to switch jobs? ;D
Will this machine be powerful enough for my iTunes and checking email?
:welc: :karma:
Quote from: aurichie on Aug 06, 2010, 00:03:42
Will this machine be powerful enough for my iTunes and checking email?
Definitely not. For that, you'd need 24 cores. ;)
Hi and welcome to the forum. :welc: :karma: