With the slowdown of Moore’s law, the need for more effective, specialized computing solutions in HEP is increasing. This is particularly true for the real-time processing of data in high-rate LHC experiments. One promising solution is the use of FPGAs, which are a standard choice in industry wherever large computing power is needed in high-tech devices produced in limited quantities (avionics, medical scanners, advanced radars, etc.). FPGAs continue to progress at a rapid pace, are well suited to highly parallel, high-speed environments, and have recently been gaining popularity in HEP as well. The LHCb experiment is currently undergoing a major upgrade for Run 3, in which the complete detector will be read out and events fully reconstructed at the full LHC crossing rate, while at the same time planning for future runs at even higher luminosities. In this context, intense R&D is being performed on alternatives to traditional general-purpose computing. We present the current status of efforts towards the use of FPGAs for several tasks: reconstruction of clusters in the pixel detector, track reconstruction in the VELO, and reconstruction of large sections of the tracking system. This includes both simulations and tests with actual hardware in the realistic environment of the first prototypes of LHCb's upgraded DAQ system.