This is a HUGE topic - and as the comments say, there's no magic bullet.
I'll separate the response into two sections: architecture and process.
From an architecture point of view, there are a number of practices. Firstly, there is horizontal scaling - i.e. you add more servers, typically managed by a load balancer. This is a relatively cheap hardware solution, but requires you to know where your bottleneck is. The easiest horizontal scalability trick is adding more web servers; scaling database servers horizontally typically requires significant complexity, using techniques such as sharding. Horizontal scaling can improve your resilience as well as performance.
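To make the sharding idea concrete, here's a minimal sketch of shard routing. The connection strings are hypothetical, and a real implementation also needs a plan for rebalancing when you add shards (e.g. consistent hashing or a lookup table):

```csharp
using System;

// Minimal shard-routing sketch. The connection strings are hypothetical;
// this is an illustration of the idea, not a production design.
public static class ShardRouter
{
    private static readonly string[] Shards =
    {
        "Server=db-shard-0;Database=App;Integrated Security=true",
        "Server=db-shard-1;Database=App;Integrated Security=true",
        "Server=db-shard-2;Database=App;Integrated Security=true"
    };

    // Deterministically map a customer to a shard, so the same
    // customer's data always lives on the same database server.
    public static string GetConnectionString(int customerId)
    {
        int index = Math.Abs(customerId % Shards.Length);
        return Shards[index];
    }
}
```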
Vertical scaling basically means upgrading the hardware - more RAM, more CPU, faster (SSD) storage, etc. This is often the cheapest solution. It may also mean separating elements of the solution - e.g. moving the database onto a separate server from the web server.
The next architectural solution is caching - this is a huge topic in its own right. Adding a CDN is a good first step; many CDN providers also offer "application accelerator" options, which effectively act as a reverse caching proxy (much like @Aviatrix recommends). Running your own reverse caching proxy is often the answer when you have unusual requirements in your environment, or when you want to offload static file serving from your ASP.NET servers.
Of course, ASP.NET offers lots of caching options within the framework - make sure you read up on those and understand them; they give a huge bang for your buck. Also run your solution through a tool like YSlow to make sure you're setting the appropriate HTTP cache headers.
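As a taste of the framework support, here's output caching in ASP.NET MVC (the controller and data access below are made up for illustration):

```csharp
using System.Web.Mvc;

public class ProductController : Controller
{
    // Cache the rendered output for 10 minutes, with one cache entry
    // per product id; requests served from cache never touch the database.
    [OutputCache(Duration = 600, VaryByParam = "id")]
    public ActionResult Details(int id)
    {
        var product = LoadProduct(id);
        return View(product);
    }

    // Hypothetical data access - stands in for your real repository.
    private object LoadProduct(int id)
    {
        return new { Id = id, Name = "Example" };
    }
}
```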
Another architectural solution that may or may not help is invoking external services asynchronously. If your solution depends on an external web service, calling that service synchronously ties up a worker thread for the duration of each call, and limits your site to the capacity and resilience of the external system. For high-traffic solutions, that's not a good idea.
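For example, with the async support in .NET 4.5 / MVC 4, an action can release its worker thread while the external call is in flight (the endpoint below is made up):

```csharp
using System.Net.Http;
using System.Threading.Tasks;
using System.Web.Mvc;

public class QuoteController : Controller
{
    // HttpClient is designed to be shared and reused.
    private static readonly HttpClient Client = new HttpClient();

    // An async action returns the worker thread to the pool while the
    // external call is in flight, so a slow third-party service doesn't
    // exhaust your thread pool under load.
    public async Task<ActionResult> Latest()
    {
        // Hypothetical external endpoint.
        string json = await Client.GetStringAsync("https://api.example.com/quotes/latest");
        return Content(json, "application/json");
    }
}
```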
For very high scalability, many web sites use NoSQL for persistence - this is another huge topic, and there are many complex trade-offs.
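Just to illustrate the programming model, here's a sketch using the official MongoDB C# driver - the database name, collection, and document shape are all assumptions:

```csharp
using System.Threading.Tasks;
using MongoDB.Bson;
using MongoDB.Driver; // official MongoDB C# driver (2.x)

public class SessionStore
{
    private readonly IMongoCollection<BsonDocument> _sessions;

    public SessionStore(string connectionString)
    {
        var client = new MongoClient(connectionString); // e.g. "mongodb://localhost:27017"
        _sessions = client.GetDatabase("app").GetCollection<BsonDocument>("sessions");
    }

    // Schemaless key/value-style persistence: no joins, no migrations,
    // and straightforward to scale out across a sharded cluster.
    public Task SaveAsync(string sessionId, BsonDocument state)
    {
        state["_id"] = sessionId; // upsert by key
        var filter = Builders<BsonDocument>.Filter.Eq("_id", sessionId);
        return _sessions.ReplaceOneAsync(filter, state,
            new ReplaceOptions { IsUpsert = true });
    }
}
```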
From a process point of view, if scalability is a primary concern, you need to bake it into your development process. This means conducting regular performance and scalability assessments throughout the project, and building a measurement framework so you can decide which optimizations to pursue.
You need to be able to load test your solution - but load testing at production levels of traffic is usually commercially unrealistic, so you need an alternative; I regularly use JMeter against representative infrastructure. You also need to be able to find your bottlenecks under load - this may require instrumenting your code and using a profiler (RedGate do a great one).
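If you can't run a profiler in the environment you're load testing, even crude hand-rolled instrumentation helps you find the bottleneck - a minimal sketch:

```csharp
using System;
using System.Diagnostics;

// Crude hand-rolled instrumentation - a stand-in for a real profiler
// or measurement framework when neither is available under load test.
public static class Instrument
{
    public static T Time<T>(string operation, Func<T> work)
    {
        var stopwatch = Stopwatch.StartNew();
        try
        {
            return work();
        }
        finally
        {
            stopwatch.Stop();
            // In practice, feed this into your measurement framework;
            // Trace keeps the sketch self-contained.
            Trace.WriteLine(operation + ": " + stopwatch.ElapsedMilliseconds + " ms");
        }
    }
}

// Usage (hypothetical repository call):
// var orders = Instrument.Time("LoadOrders", () => repository.GetOrders(customerId));
```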
Most importantly, you need a process for evaluating trade-offs - nearly every performance/scalability improvement comes at the expense of something else you care about. Load balancers cost money; reverse caching proxies increase complexity; NoSQL requires new skills from your development team; "clever" coding practices often reduce maintainability. I recommend establishing your required baseline, building a measurement framework to evaluate your solution against that baseline, and profiling to identify the bottleneck. Each scalability improvement must address the current bottleneck, and I recommend a proof-of-concept stage to make sure the solution really does have the expected impact.
Finally, 10,000 concurrent users isn't a particularly large number for most web applications on modern hardware.