The general approach to problems of this kind is to maximize the (regularized) likelihood of your data.
$$\mathrm{LL}(y_0, a, b, \sigma_0, c, d) = \sum_{i=1}^{n} \log \phi\left(y_i,\; y_0 + a x_i + b t_i,\; \sigma_0 + c x_i + d t_i\right)$$
where
$$\phi(x, \mu, \sigma) = \frac{1}{\sqrt{2\pi}\,\sigma} \exp\left(-\frac{(x-\mu)^2}{2\sigma^2}\right)$$
You can code this expression into a function in your favorite statistical package (I would prefer Python, R, or Stata, since I have never programmed in SPSS). Then you can feed it to a numerical optimizer, which will estimate the optimal value $\hat{\theta}$ of your parameters $\theta = (y_0, a, b, \sigma_0, c, d)$.
If you need confidence intervals, the optimizer can also estimate the Hessian matrix $H$ of the negative log-likelihood (its second derivatives with respect to $\theta$) around the optimum. The theory of maximum likelihood estimation says that for large $n$ the covariance matrix of $\hat{\theta}$ may be estimated as $H^{-1}$.
Here is example code in Python:
import numpy as np
import scipy.optimize
import scipy.stats
# generate toy data for the problem
np.random.seed(1) # fix random seed
n = 1000 # fix problem size
x = np.random.normal(size=n)
t = np.random.normal(size=n)
mean = 1 + x * 2 + t * 3
std = 4 + x * 0.5 + t * 0.6
y = np.random.normal(size=n, loc=mean, scale=std)
# create negative log likelihood
def neg_log_lik(theta):
    est_mean = theta[0] + x * theta[1] + t * theta[2]
    # guard against non-positive sigma, which the linear model allows
    est_std = np.maximum(theta[3] + x * theta[4] + t * theta[5], 1e-10)
    return -np.sum(scipy.stats.norm.logpdf(y, loc=est_mean, scale=est_std))
# maximize
initial = np.array([0., 0., 0., 1., 0., 0.])  # start sigma at 1 so the initial scale is positive
result = scipy.optimize.minimize(neg_log_lik, initial)  # BFGS by default for an unconstrained problem
# extract point estimation
param = result.x
print(param)
# extract standard error for confidence intervals
std_error = np.sqrt(np.diag(result.hess_inv))  # result.hess_inv is the BFGS approximation of H^{-1}
print(std_error)
Notice that your problem formulation can produce negative $\sigma$, and I had to defend against this by brute-force replacement of too-small $\sigma$ with $10^{-10}$.
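If you want to avoid the clipping entirely, an alternative is to parametrize the logarithm of $\sigma$, so that positivity is automatic. This is a minimal sketch, not the formulation above: it changes the model for $\sigma$ from linear to log-linear, so the coefficients $\sigma_0, c, d$ are no longer directly comparable.

# sketch: log-linear model for sigma; exp(...) is always positive
# note: this is a different model for sigma than the linear one above
def neg_log_lik_log_sigma(theta):
    est_mean = theta[0] + x * theta[1] + t * theta[2]
    est_std = np.exp(theta[3] + x * theta[4] + t * theta[5])
    return -np.sum(scipy.stats.norm.logpdf(y, loc=est_mean, scale=est_std))

result_log = scipy.optimize.minimize(neg_log_lik_log_sigma, np.zeros(6))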
The result (parameter estimates and their standard errors) produced by the code is:
[ 0.8724218 1.75510897 2.87661843 3.88917283 0.63696726 0.5788625 ]
[ 0.15073344 0.07351353 0.09515104 0.08086239 0.08422978 0.0853192 ]
You can see that the estimates are close to their true values $(1, 2, 3, 4, 0.5, 0.6)$, which confirms the correctness of this approach on the simulated data.
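If you want explicit intervals rather than just standard errors, here is a minimal sketch using the asymptotic normality of $\hat{\theta}$ (param and std_error are the arrays computed above):

# 95% confidence intervals from the asymptotic normal approximation
z = 1.96  # standard normal quantile for a two-sided 95% interval
for estimate, se in zip(param, std_error):
    print(f"{estimate:.3f} +/- {z * se:.3f}")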