It would take several pages to explain in detail all the different ways in which you are utterly wrong. But in summary:
- I really do not know where to start with the notion that only CISC CPUs can perform "complex mathematical and algebraic operations" with "extreme precision and efficiency". SPARC, MIPS and PowerPC have a lengthy track record here, having been used for CGI in films, by the oil & gas industry, the financial services sector, and so on.
- A RISC CPU is not inherently a "low-power part for small form factor devices". The earliest RISC CPUs were used to build servers, workstations and mainframes. IBM dominated enterprise computing with the RS/6000 workstation, and its S/390 CPUs implemented a CISC ISA on top of a variant of its POWER RISC platform.
- CISC does not mean "able to do complicated things". CISC means "I have a complicated instruction set whose instructions may take several clock cycles to execute and which you may never use".
- I have no idea why you think the inability to emulate another instruction set at full speed rules an architecture out as viable.
- The Motorola 68K and Itanium are not RISC architectures: the 68K is CISC, and Itanium is a VLIW/EPIC design. The 68K is "dead" because it could not run Windows, and Itanium was simply a poor design.
I remember life 25-30 years ago. Nobody in their right mind would have deployed x86 in the enterprise server space; it simply was not done. Every RISC CPU wiped the floor with x86 at the time. RISC lost because x86 was cheaper and could run Windows, and Intel was eventually able to hot-rod its rubbish architecture to make it run fast.
These days, the CISC vs RISC distinction does not matter. It was important in the 1980s and 90s, when chip real estate was at a premium and RISC could use the space vacated by complex instructions to make simple instructions run much faster. Nowadays everything, including x86, is implemented on a RISC-like core, with the higher-level CISC instructions microcoded.