As U.S. voters wait to hear who the next president will be, Twitter, Facebook, Google and other internet firms will be busy doing something else: monitoring their sites and deciding if and when to stop the spread of misinformation.

After the 2016 U.S. election, in which internet firms were criticized for allowing foreign-sponsored actors to use their networks to spread misinformation, they vowed to take steps to better protect their sites.

Once the coronavirus pandemic hit, companies began to more directly tackle misinformation related to the health crisis, observers say, and turned to more automated ways to moderate content, such as artificial intelligence. Those practices have carried over to efforts to address misinformation around the election, said Spandana Singh, a policy analyst with New America's Open Technology Institute.

"A number of the policies and…