I haven't seen this written down, but I'd define an algorithm as "a series of well-defined steps for solving a problem". By well-defined, I mean that each step is either an algorithm itself (like a subroutine), or else a simple, mathematically unambiguous operation.
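To make this concrete, here's a tiny sketch (mine, just for illustration) of Euclid's algorithm in Python, where every step is either a simple, unambiguous operation or a call to the procedure itself acting as a subroutine:

```python
def gcd(a: int, b: int) -> int:
    """Euclid's algorithm: each step is either a simple, unambiguous
    operation or a call to gcd itself (an algorithm as a subroutine)."""
    if b == 0:              # simple, unambiguous test
        return a            # simple, unambiguous result
    return gcd(b, a % b)    # subroutine call: an algorithm used as a step
```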
A more precise mathematical definition follows once you fix your model of computation, which mathematically specifies exactly which steps are allowed and in what sequence. For example, a Turing Machine formally specifies what steps it may take and the order in which it takes them; similarly for, say, a pushdown automaton, and nearly so for many programming languages.
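As a rough illustration (a sketch only, with made-up names like `delta` and `step`, not any standard formalization), the heart of a Turing Machine is a transition table that pins down the one legal step at every point:

```python
from typing import Dict, List, Tuple

# delta maps (state, symbol) -> (new_state, symbol_to_write, head_move).
Transition = Dict[Tuple[str, str], Tuple[str, str, int]]

def step(delta: Transition, state: str, tape: List[str], head: int) -> Tuple[str, int]:
    """One step of a Turing Machine: delta specifies the unique legal
    action for the current (state, symbol) pair. Assumes the tape is
    padded with blanks so tape[head] is always valid."""
    new_state, write, move = delta[(state, tape[head])]
    tape[head] = write                 # write the specified symbol
    return new_state, head + move      # move is -1 (left) or +1 (right)
```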
One cannot have a universal definition that encompasses all possible models of computation, unless one could somehow formalize the notion of "all possible models of computation", which seems impossible. For example, tomorrow I might invent a model of computation involving dropping beads into buckets, and as long as I formally specify which steps can be taken and in what order, one could write algorithms for my new type of computer.
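If I did formalize such a toy model, it might look something like the sketch below (entirely made up, just to illustrate that specifying the allowed steps is the whole job, after which "algorithms" for the model make sense):

```python
from typing import List

# A made-up "bead machine": the only allowed steps are dropping a bead
# into a bucket or moving one bead between buckets.
def drop(buckets: List[int], i: int) -> None:
    buckets[i] += 1                    # allowed step 1: add one bead

def move(buckets: List[int], src: int, dst: int) -> None:
    if buckets[src] > 0:               # allowed step 2: move one bead
        buckets[src] -= 1
        buckets[dst] += 1

# An algorithm for this model: compute a + b in bucket 0.
def add(a: int, b: int) -> int:
    buckets = [0, 0]
    for _ in range(a):
        drop(buckets, 0)
    for _ in range(b):
        drop(buckets, 1)
    while buckets[1] > 0:
        move(buckets, 1, 0)
    return buckets[0]
```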
The Church-Turing thesis is related: it is a proposed natural law stating that any such model of computation can be "simulated" by a Turing Machine. So in that sense, my algorithm for my new type of computer would be no more "powerful" than algorithms for Turing Machines. Because of this, if one accepts the Church-Turing thesis, then one can, without loss of generality, define an algorithm as a type of Turing Machine. (On the other hand, in theoretical CS we often do consider models beyond plain Turing Machines, such as TMs with access to randomness, or to oracles for the halting problem, etc.)
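"Simulated" here just means a step-by-step re-enactment of the other model's behavior. For the made-up bead machine above, a sketch might look like this (a Python program rather than a literal Turing Machine, but by the usual arguments anything a Python program computes, a TM can compute too):

```python
from typing import List, Tuple

def simulate(program: List[Tuple], n_buckets: int) -> List[int]:
    """Re-enact a bead-machine run instruction by instruction.
    program is a list like [("drop", 0), ("move", 1, 0), ...]."""
    buckets = [0] * n_buckets
    for instr in program:
        if instr[0] == "drop":
            buckets[instr[1]] += 1
        elif instr[0] == "move" and buckets[instr[1]] > 0:
            buckets[instr[1]] -= 1
            buckets[instr[2]] += 1
    return buckets
```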