Large Language Models (LLMs) have shown impressive performance as chatbots and are consequently used by millions of people worldwide. This brings their safety and trustworthiness to the forefront, making it imperative to ensure their reliability. Prior work has generally sought to establish trust in LLMs through evaluations on standard benchmarks. Such analysis, however, is insufficient: benchmark datasets have limited coverage, they may have been used in LLMs' safety training, and benchmark scores come with no formal guarantees. As an alternative, we propose quantitative certificates for LLMs. We present LLMCert, the first family of frameworks that provides formal guarantees on the behaviors of LLMs. Individual frameworks certify LLMs for counterfactual bias and knowledge comprehension. We provide details of the individual frameworks on their respective project pages given below.
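
As a rough illustration of what a quantitative certificate can look like, the minimal sketch below lower-bounds the probability that an LLM's responses satisfy a desired property over a distribution of prompts, using Monte Carlo sampling and a Clopper-Pearson confidence bound. This is an illustrative assumption, not the exact construction used by the LLMCert frameworks: the names `certify`, `sample_prompt`, and `check_property`, and the choice of bound, are hypothetical stand-ins; see the individual project pages for the actual certification procedures.

```python
import random
from scipy.stats import beta


def clopper_pearson_lower(successes: int, trials: int, alpha: float = 0.05) -> float:
    """One-sided Clopper-Pearson lower confidence bound on a success probability."""
    if successes == 0:
        return 0.0
    return beta.ppf(alpha, successes, trials - successes + 1)


def certify(check_property, sample_prompt, num_samples: int = 500, alpha: float = 0.05) -> float:
    """Return a (1 - alpha)-confidence lower bound on the probability that the
    LLM's response satisfies the property, over the given prompt distribution."""
    successes = sum(check_property(sample_prompt()) for _ in range(num_samples))
    return clopper_pearson_lower(successes, num_samples, alpha)


if __name__ == "__main__":
    # Hypothetical stand-ins: in practice, sample_prompt would draw prompts from a
    # specification and check_property would query an LLM and verify its response.
    random.seed(0)
    sample_prompt = lambda: "placeholder prompt drawn from a specification"
    check_property = lambda prompt: random.random() < 0.9  # simulated property check
    bound = certify(check_property, sample_prompt)
    print(f"With 95% confidence, P(property holds) >= {bound:.3f}")
```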