soc/amd/common/cpu/smm/smm_relocate: don't assume TSEG is below 4GB

Even though TSEG is currently always located below 4GB, it's better not
to bake that assumption into the SMM relocation code. Instead of
clearing the upper 32 bits and assigning the TSEG base and per-core SMM
base only to the lower 32 bits of the MSR, assign those two base
addresses to the raw 64-bit MSR value so they won't be truncated. Since
TSEG will realistically never be larger than 4GB and it needs to be
aligned to its power-of-two size, the TSEG mask still only needs to
affect the lower half of the corresponding MSR value.

Signed-off-by: Felix Held <felix-coreboot@felixheld.de>
Change-Id: I1004b5e05a7dba83b76b93b3e7152aef7db58f4d
Reviewed-on: https://review.coreboot.org/c/coreboot/+/73639
Tested-by: build bot (Jenkins) <no-reply@coreboot.org>
Reviewed-by: Martin Roth <martin.roth@amd.corp-partner.google.com>
Reviewed-by: Fred Reitberger <reitbergerfred@gmail.com>
Reviewed-by: Eric Lai <eric_lai@quanta.corp-partner.google.com>
Reviewed-by: Arthur Heymans <arthur@aheymans.xyz>
diff --git a/src/soc/amd/common/block/cpu/smm/smm_relocate.c b/src/soc/amd/common/block/cpu/smm/smm_relocate.c
index 4d33b65..1b929c7 100644
--- a/src/soc/amd/common/block/cpu/smm/smm_relocate.c
+++ b/src/soc/amd/common/block/cpu/smm/smm_relocate.c
@@ -65,8 +65,7 @@
 	smm_region(&tseg_base, &tseg_size);
 
 	msr_t msr;
-	msr.lo = tseg_base;
-	msr.hi = 0;
+	msr.raw = tseg_base;
 	wrmsr(SMM_ADDR_MSR, msr);
 
 	msr.lo = ~(tseg_size - 1);
@@ -76,8 +75,7 @@
 
 	uintptr_t smbase = smm_get_cpu_smbase(cpu_index());
 	msr_t smm_base = {
-		.hi = 0,
-		.lo = smbase
+		.raw = smbase
 	};
 	wrmsr(SMM_BASE_MSR, smm_base);