Stochastic approximation is a method for finding fixed points (or roots) of operators. Originally proposed in the 1950s, it lies at the heart of a large class of algorithms across domains including control theory, signal processing, optimization, and machine learning; the well-known stochastic gradient descent method is a special case of stochastic approximation. The goal of this project is to develop finite-time bounds on the convergence error of stochastic approximation. Such bounds have immediate applications in reinforcement learning.

Consider a contractive operator. The Banach fixed-point theorem tells us that repeatedly applying it, starting from any initial point, converges geometrically fast to the fixed point. It turns out that a small modification of this approach also works for nonexpansive operators (assuming a fixed point exists), albeit at a slower rate. Our focus is on the setting where the operator can only be evaluated with noise. Here, the contractive case is well understood: tight convergence rates are known. In the nonexpansive case, however, prior work either obtains tight bounds in special cases or only loose bounds in the general case. The goal of this project is to obtain tight bounds in the general nonexpansive case.
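To make the setup concrete, the following display sketches the three iterations discussed above in generic notation; the operator $T$, step sizes $\alpha_k$, and noise $w_k$ are illustrative symbols rather than notation fixed by this project, and the averaged (Krasnoselskii–Mann) scheme is shown as one standard modification for the nonexpansive case.

\[
x_{k+1} = T(x_k), \qquad \|x_{k} - x^*\| \le \gamma^{k}\,\|x_0 - x^*\|, \quad \gamma < 1 \quad \text{(contractive $T$, Banach iteration)}
\]
\[
x_{k+1} = (1-\alpha_k)\,x_k + \alpha_k\,T(x_k) \quad \text{(nonexpansive $T$, averaged iteration)}
\]
\[
x_{k+1} = (1-\alpha_k)\,x_k + \alpha_k\,\bigl(T(x_k) + w_k\bigr) \quad \text{(noisy evaluations; the stochastic approximation setting studied here)}
\]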