Gibbs sampling is a Markov chain Monte Carlo (MCMC) method often used in Bayesian learning. MCMC methods are widely believed to be difficult to deploy on parallel and distributed systems because of their inherently sequential nature. In this paper, we examine Asynchronous Gibbs sampling, a scheme that achieves parallelism by simply ignoring sequential requirements. This method has been shown to produce good empirical results in some problems, and is especially attractive in settings, such as hierarchical random-effects modeling with data augmentation, where the problem dimension grows with the sample size. Unfortunately, the algorithm has also been shown to diverge in other settings. We introduce a theoretical framework for analyzing Asynchronous Gibbs sampling and other extensions of MCMC that do not possess the Markov property. We prove that Asynchronous Gibbs can be made to always converge through a minor modification, which we call the Exact Asynchronous Gibbs algorithm. We provide examples that illustrate some of the algorithm's properties with respect to scaling, and an example that compares the exact and approximate algorithms.
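To make the idea of "ignoring sequential requirements" concrete, the sketch below runs two workers that each repeatedly resample one coordinate of a bivariate Gaussian from its full conditional, reading the other coordinate without any synchronization. The target distribution, correlation value, worker layout, and iteration count are all illustrative assumptions chosen for this sketch; they are not the paper's experiments, and the correction that defines the Exact variant is not shown.

```python
# Minimal sketch of Asynchronous Gibbs on a bivariate Gaussian
# N(0, [[1, rho], [rho, 1]]), where both full conditionals are
# available in closed form. All settings here are illustrative
# assumptions, not taken from the paper.
import threading
import numpy as np

rho = 0.9          # correlation of the toy target
n_iter = 10_000    # updates per worker
state = np.zeros(2)   # shared state, deliberately left unsynchronized
samples = [[], []]    # per-coordinate traces

def worker(i, seed):
    """Repeatedly draw coordinate i from its full conditional
    x_i | x_j ~ N(rho * x_j, 1 - rho**2), reading x_j without locking.
    The unlocked, possibly stale read is the 'ignored' sequential
    requirement that distinguishes this from ordinary Gibbs."""
    rng = np.random.default_rng(seed)
    j = 1 - i
    for _ in range(n_iter):
        xj = state[j]                                   # possibly stale read
        state[i] = rng.normal(rho * xj, np.sqrt(1 - rho**2))
        samples[i].append(state[i])

threads = [threading.Thread(target=worker, args=(i, i)) for i in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Inspect a marginal; discard the first half as burn-in.
print("empirical std of x0:", np.std(samples[0][n_iter // 2:]))
```

On easy targets like this one the scheme often behaves reasonably, but, as noted above, it can diverge in other settings; the paper's modification restores guaranteed convergence while keeping the unsynchronized structure.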