Yes, gc is amortized constant time. Suppose you have an algorithm which runs for time $n$ with a peak working set of size $k$. Note that you can allocate at most $O(n)$ words during the execution of the program, and that the time cost of running a copying garbage collector is $O(k)$ (i.e., the cost of a gc is proportional to the total amount of live data). So if you run the gc at most $O(n/k)$ times, then the total runtime cost is bounded by $O(n)$, which means that the amortized cost of the gc is constant.

Concretely, if you have a Cheney-style collector with each semispace of size $2k$, then after each collection at most $k$ words are live, so at least $k$ more words must be allocated before the semispace fills again. Since allocating $k$ words takes $\Omega(k)$ time, a full collection can be invoked at most once every $k$ steps, i.e., at most $O(n/k)$ times in total, which gives you the bound you want. This justifies ignoring gc issues.
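To make the accounting explicit, here is the worked bound, writing $c$ for the constant hidden in the $O(k)$ collection cost:

$$\underbrace{n}_{\text{mutator steps}} \;+\; \underbrace{(n/k)}_{\#\text{ collections}} \cdot \underbrace{c\,k}_{\text{cost per gc}} \;=\; (1+c)\,n \;=\; O(n),$$

so the gc adds at most a constant $1+c$ per step of the program.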
However, one case where the presence or absence of gc is not ignorable is when writing lock-free data structures. Many modern lock-free data structures deliberately leak memory and rely on gc for correctness. At a high level, they work by copying some data, modifying the copy, and trying to atomically install it with a CAS instruction, retrying in a loop until the CAS succeeds (see the sketch below). Adding deterministic deallocation to these algorithms makes them much more complex, so people often don't bother, especially since they are often targeted at Java-like garbage-collected environments.
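As a minimal sketch of this pattern, here is a Treiber-style stack written with OCaml's `Atomic` module (the names `push` and `pop` are illustrative, not from any particular library); the whole stack is an immutable list, and reclamation of popped cells is left entirely to the gc:

```ocaml
(* A Treiber stack: the stack is an immutable list held in an atomic
   reference.  Each update copies the head, then races to install the
   new version with a compare-and-set, retrying until the CAS wins. *)
type 'a stack = 'a list Atomic.t

let create () : 'a stack = Atomic.make []

let rec push (s : 'a stack) (v : 'a) : unit =
  let old = Atomic.get s in
  if not (Atomic.compare_and_set s old (v :: old)) then push s v

let rec pop (s : 'a stack) : 'a option =
  match Atomic.get s with
  | [] -> None
  | (v :: rest) as old ->
      if Atomic.compare_and_set s old rest then Some v
      else pop s
```

Nothing here frees a popped cell explicitly: another thread may still be inspecting it inside its own retry loop, and it is the gc that guarantees the memory stays valid until no thread can reach it. Doing the same reclamation by hand requires extra machinery such as hazard pointers or epoch-based schemes.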
EDIT: If you want non-amortized bounds, the Cheney collector won't do it -- it scans the whole live set each time it is invoked, so a single allocation can take $O(k)$ time. The keyword to google for is "real-time garbage collection": Dijkstra et al. and Steele gave the first real-time mark-and-sweep collectors, and Baker gave the first real-time compacting gc.
@article{dijkstra1978fly,
title={{On-the-fly garbage collection: An exercise in cooperation}},
author={Dijkstra, E.W. and Lamport, L. and Martin, A.J. and Scholten, C.S. and Steffens, E.F.M.},
journal={Communications of the ACM},
volume={21},
number={11},
pages={966--975},
issn={0001-0782},
year={1978},
publisher={ACM}
}
@article{steele1975multiprocessing,
title={{Multiprocessing compactifying garbage collection}},
author={Steele Jr, G.L.},
journal={Communications of the ACM},
volume={18},
number={9},
pages={495--508},
issn={0001-0782},
year={1975},
publisher={ACM}
}
@article{baker1978list,
title={{List processing in real time on a serial computer}},
author={Baker Jr, H.G.},
journal={Communications of the ACM},
volume={21},
number={4},
pages={280--294},
issn={0001-0782},
year={1978},
publisher={ACM}
}
List.map in OCaml actually has quadratic complexity, because its stack depth is linear in the length of the list and the whole stack is traversed (as a source of roots) each time the nursery is evacuated. The same goes for major slices encountering large arrays of pointers. – J D Jan 30 '11 at 23:52
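A minimal sketch of the usual workaround for the stack-depth half of that comment (the helper name `map_tail` is illustrative, not part of the stdlib):

```ocaml
(* List.map is not tail-recursive: mapping a list of length n keeps
   n frames on the stack, and every minor collection walks those
   frames looking for roots.  A tail-recursive map keeps the stack
   depth constant by accumulating in reverse and flipping at the end;
   both List.rev_map and List.rev are tail-recursive. *)
let map_tail f xs = List.rev (List.rev_map f xs)
```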