TL;DR
Is overloading supported in SQL Server? Nope. But even if it were, you would likely run into issues related to source control and/or debugging.
Can a subset of parameters be passed in to alter the behavior? Yep, but you will end up with a single piece of code that isn't easy to work with or maintain.
Conclusion: Just create one Stored Procedure or Function per behavior. It will be:
- easier to work with via both T-SQL and app code
- easier to maintain (hence less error-prone)
- easier to debug
- easier to integrate into source control
- at least one thing you won't be blamed for after you leave or change teams / departments ;-)
There seems to be some confusion regarding how to interpret this Question with regard to the ultimate goal, and assumptions are being made on both sides, so let's start by clarifying what we are actually talking about.
Overloading
Overloading consists of all, not some, of the following:
Code separation: Completely separate functions / methods.
Exact same function / method name for these separate pieces of code.
Unique number of, and/or combination of datatypes for, input parameters.
Execution of the shared name routes to the proper function / method based upon the signature -- the unique variation of input parameters.
Each separate function / method of the same name but differing input parameters can have a different datatype for the return value (note: this is merely an ability, not a requirement like the items noted above).
With this in mind, overloading is not possible (to any degree) in SQL Server, either for Stored Procedures or Functions (Scalar or TVF). Stored Procedures and Functions must have unique names (at least within a particular Schema). However, this restriction is just for user-defined objects as quite a few built-in Functions make use of overloading.
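For example, attempting to create a second Stored Procedure with the same name, even with a completely different signature, simply fails (dbo.GetItem is a made-up name just for illustration):
CREATE PROCEDURE dbo.GetItem (@ItemID INT)
AS
SELECT @ItemID AS [ItemID];
GO
CREATE PROCEDURE dbo.GetItem (@ItemName NVARCHAR(50))
AS
SELECT @ItemName AS [ItemName];
GO
-- The second CREATE PROCEDURE fails with an error along the lines of:
-- Msg 2714: There is already an object named 'GetItem' in the database.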
SQL Server does have "numbered" Stored Procedures (as pointed out in @Aaron's answer), but these should not be confused with overloads (not even partially) because they are missing everything that makes overloads different from those same, separate pieces of code having unique names:
While the "base" name of the Stored Procedure can be the same, this base name cannot be used to reference the code for execution; you are required to include the "number" (well, for ;1 it is optional), making it no different than having a vastly different name. Meaning, executing that name without any ;<number> following the name will always route to ProcName;1. Hence the "number" is actually part of the name.
In fact, the only time you can refer to the group without using the ;<number> syntax is in a DROP PROCEDURE statement, and then that is the only way to reference the group. Meaning, you cannot DROP PROCEDURE ProcName;3 but instead can only drop all of them (demonstrated after the example below).
The variations of input parameters are not required to be unique, as shown in this example:
-- "Version" 1 (i.e. ;1 -- specifying the number here is optional)
CREATE PROCEDURE #NumberedProc (@Input INT)
AS
SELECT @Input AS [inp];
GO
-- "Version" 2 has the exact same signature as ;1
CREATE PROCEDURE #NumberedProc;2 (@Input INT)
AS
SELECT @Input * 2 AS [h], 2 AS [g];
GO

EXEC #NumberedProc 55;   -- routes to ;1
EXEC #NumberedProc;2 55; -- the ;2 is required to reach this "version"
Automatic routing to the appropriate module is not available, nor could it be, given that the signatures are allowed to be the same. Hence the number is required when executing (except for ;1, which is the default).
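And, continuing the example above, neither "version" can be dropped individually; the only option is to drop the entire group:
DROP PROCEDURE #NumberedProc; -- removes both ;1 and ;2 (there is no syntax to target only ;2)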
True overloading is allowed in PostgreSQL for Functions, and it does work, but it also causes more pain than it is worth. It did, to a small degree, simplify the code from the app layer perspective, but it was very difficult both to manage in source control (assuming each object has a script named for the name of the object) and to debug (you had to hunt down the particular signature from the error message). We eventually did away with the overloads and made every unique signature have a unique object name ;-).
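For reference, a minimal sketch of what that looks like in PostgreSQL (the function name format_val is purely illustrative):
-- Two completely separate functions share the name "format_val";
-- PostgreSQL routes each call based on the argument's datatype.
CREATE FUNCTION format_val(val integer) RETURNS text
  LANGUAGE sql AS $$ SELECT 'int: ' || val::text $$;

CREATE FUNCTION format_val(val text) RETURNS text
  LANGUAGE sql AS $$ SELECT 'text: ' || val $$;

SELECT format_val(5);      -- int: 5
SELECT format_val('five'); -- text: five

-- Each signature has to be dropped (and scripted) individually:
DROP FUNCTION format_val(integer);
DROP FUNCTION format_val(text);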
Passing in a subset of parameters resulting in different behavior
You are allowed to have optional parameters that have default values (and the default can be NULL) for both Stored Procedures and Functions, but this is still a single Stored Procedure or Function. Meaning:
There is no separation of code; you will need a series of IF statements to properly route to the desired functionality. Assuming that the default values are set to NULL, you would be checking for which input parameters are NOT NULL.
Return types:
Scalar Functions can only have one return datatype. You could return SQL_VARIANT to avoid committing to a specific return type, but there are issues with this approach: it degrades performance, and LOB types (i.e. VARCHAR(MAX), NVARCHAR(MAX), VARBINARY(MAX), and XML) are not supported. (A quick sketch of this approach follows this list.)
Table-Valued Functions can only have one return type, and it is determined at compile time.
Stored Procedures do allow for defining the result set structure at run-time, but that would be a very confusing interface to code to in the app layer.
Stored Procedures also allow for OUTPUT parameters, but you would need at least one per datatype that you will be returning, and the calling code would need to know which one(s) to check based on which input parameters are being passed in. This would also be a confusing interface to code to.
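As a minimal sketch of the SQL_VARIANT approach mentioned above (the function name dbo.ReturnOneOfTwoTypes is made up for illustration):
CREATE FUNCTION dbo.ReturnOneOfTwoTypes (@AsString BIT)
RETURNS SQL_VARIANT
AS
BEGIN
  -- Return an NVARCHAR value wrapped in SQL_VARIANT...
  IF (@AsString = 1)
  BEGIN
    RETURN CONVERT(SQL_VARIANT, N'some string');
  END;

  -- ...or an INT value, also wrapped in SQL_VARIANT.
  RETURN CONVERT(SQL_VARIANT, 123);
END;
GO

SELECT SQL_VARIANT_PROPERTY(dbo.ReturnOneOfTwoTypes(0), 'BaseType'); -- int
SELECT SQL_VARIANT_PROPERTY(dbo.ReturnOneOfTwoTypes(1), 'BaseType'); -- nvarchar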
Because this is a single object, you don't have the management and debugging issues like you would with true overloads, but you now have a large, messy, convoluted Stored Procedure or Function that is also hard to maintain, just for different reasons.
It would be something like:
CREATE PROCEDURE dbo.DoTooMuchInOneProcedure
(
  @Int1      INT = NULL,
  @Int2      INT = NULL,
  @String1   NVARCHAR(50) = NULL,
  @IntOut    INT = NULL OUTPUT,
  @StringOut NVARCHAR(50) = NULL OUTPUT
)
AS
SET NOCOUNT ON;

-- "Overload" 1: only an INT was passed in
IF (@Int1 IS NOT NULL AND @Int2 IS NULL AND @String1 IS NULL)
BEGIN
  SET @IntOut = @Int1;
  RETURN;
END;

-- "Overload" 2: two INTs were passed in
IF (@Int1 IS NOT NULL AND @Int2 IS NOT NULL)
BEGIN
  SET @IntOut = @Int1 + @Int2;
  RETURN;
END;

-- "Overload" 3: an INT and a string were passed in
IF (@Int1 IS NOT NULL AND @String1 IS NOT NULL)
BEGIN
  SET @StringOut = @String1 + CONVERT(NVARCHAR(50), @Int1);
  RETURN;
END;
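And calling it requires the app code (or T-SQL) to know, based on which input parameters it passed in, which OUTPUT parameter to read:
DECLARE @IntResult INT,
        @StringResult NVARCHAR(50);

-- "Int + Int" behavior:
EXEC dbo.DoTooMuchInOneProcedure @Int1 = 5, @Int2 = 10, @IntOut = @IntResult OUTPUT;
SELECT @IntResult AS [IntResult];       -- 15

-- "String + Int" behavior:
EXEC dbo.DoTooMuchInOneProcedure @Int1 = 5, @String1 = N'Value: ', @StringOut = @StringResult OUTPUT;
SELECT @StringResult AS [StringResult]; -- Value: 5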