CUDA, and that's not even really a problem for some use cases (standard LLM inference) if you just grab a maxed-out Mac Studio. If you want to run workloads in parallel or play with more of the audio/visual side, then NVIDIA with CUDA still has the edge in speed and compatibility.