wspgl1

PURPOSE

WSPGL1 Solve basis pursuit, basis pursuit denoise, and LASSO

SYNOPSIS

function [x,r,g,info] = wspgl1( A, b, tau, sigma, x, options )

DESCRIPTION

WSPGL1  Solve basis pursuit, basis pursuit denoise, and LASSO

 [x, r, g, info] = wspgl1(A, b, tau, sigma, x0, options)

 ---------------------------------------------------------------------
 Solve a MODIFIED basis pursuit denoise (BPDN) problem

 (BPDN)   minimize  ||x||_1  subj to  ||Ax-b||_2 <= sigma,

 or the l1-regularized least-squares problem

 (LASSO)  minimize  ||Ax-b||_2  subj to  ||x||_1 <= tau.

 The modification lies in implementing a weighting strategy within the
 LASSO subproblems employed to solve the BPDN problem. A full description
 of this algorithm can be found in the paper:

 Hassan Mansour, "Beyond $\ell_1$ minimization for sparse signal
 recovery," IEEE Workshop on Statistical Signal Processing (SSP), Aug.
 2012
 ---------------------------------------------------------------------

 REMARKS AND AUTHORSHIP
 ======================

 30 May 12: The WSPGL1 algorithm differs from SPGL1 by introducing the 
           parameter 'omega' in the options structure. Omega specifies the
           weight that is applied to the support of the solution of each 
           LASSO subproblem.
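
 The weighting step that omega controls can be sketched as follows (a
 simplified illustration based on the support-weighting code in the source
 listing below; the support-size estimate k is hypothetical here):

```matlab
n = 512; omega = 0.3;
x = randn(n,1);                    % iterate returned by a LASSO subproblem
[~, idx] = sort(abs(x), 'descend');
k = 20;                            % estimated support size (illustrative)
weights = ones(n,1);
weights(idx(1:k)) = omega;         % down-weight the detected support
% The next LASSO subproblem then uses the weighted one-norm ||Wx||_1.
```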

 This code was modified by Hassan Mansour (hassanm@cs.ubc.ca)
 Copyright (C) 2012 Hassan Mansour
 Computer Science and Mathematics Departments, 
 University of British Columbia, Canada.
 All rights reserved. E-mail: <{hassanm}@cs.ubc.ca>.

 The following documentation and credits pertain to the original SPGL1 release.

 ---------------------------------------------------------------------

 INPUTS
 ======
 A        is an m-by-n matrix, explicit or an operator.
          If A is a function, then it must have the signature

          y = A(x,mode)   if mode == 1 then y = A x  (y is m-by-1);
                          if mode == 2 then y = A'x  (y is n-by-1).

 b        is an m-vector.
 tau      is a nonnegative scalar; see (LASSO).
 sigma    if sigma is neither Inf nor empty ([]), then spgl1 will launch
          into root-finding mode to find the tau above that solves (BPDN).
          In this case, it is STRONGLY recommended that tau = 0.
 x0       is an n-vector estimate of the solution (possibly all
          zeros). If x0 = [], then SPGL1 determines the length n via
          n = length( A'b ) and sets  x0 = zeros(n,1).
 options  is a structure of options from spgSetParms. Any unset options
          are set to their default value; set options=[] to use all
          default values.
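
 For an implicit A, a wrapper around an explicit matrix M could look like
 this (a sketch; opA is a hypothetical helper, not part of the package):

```matlab
function y = opA(M, x, mode)
% Hypothetical operator wrapper: mode 1 applies A, mode 2 applies A'.
if mode == 1
   y = M * x;     % y = A x   (m-by-1)
else
   y = M' * x;    % y = A'x   (n-by-1)
end
```

 It would then be passed to the solver as  A = @(x,mode) opA(M,x,mode).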

 OUTPUTS
 =======
 x        is a solution of the problem
 r        is the residual, r = b - Ax
 g        is the gradient, g = -A'r
 info     is a structure with the following information:
          .tau     final value of tau (see sigma above)
          .rNorm   two-norm of the optimal residual
          .rGap    relative duality gap (an optimality measure)
          .gNorm   Lagrange multiplier of (LASSO)
          .stat    = 1 found a BPDN solution
                   = 2 found a BP sol'n; exit based on small gradient
                   = 3 found a BP sol'n; exit based on small residual
                   = 4 found a LASSO solution
                   = 5 error: too many iterations
                   = 6 error: linesearch failed
                   = 7 error: found suboptimal BP solution
                   = 8 error: too many matrix-vector products
          .time    total solution time (seconds)
          .nProdA  number of multiplications with A
          .nProdAt number of multiplications with A'
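
 For example, a caller could branch on info.stat using the codes above (a
 sketch; the variable names A, b, sigma, and opts are illustrative):

```matlab
[x,r,g,info] = wspgl1(A, b, 0, sigma, [], opts);
if info.stat >= 5
   warning('wspgl1 exited abnormally with stat = %d', info.stat);
else
   fprintf('Solved: ||r||_2 = %8.2e, final tau = %8.2e\n', info.rNorm, info.tau);
end
```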

 OPTIONS
 =======
 Use the options structure to control various aspects of the algorithm:

 options.fid         File ID to direct log output
        .verbosity   0=quiet, 1=some output, 2=more output.
        .iterations  Max. number of iterations (default is 10*m).
        .bpTol       Tolerance for identifying a basis pursuit solution.
        .optTol      Optimality tolerance (default is 1e-4).
        .decTol      Larger decTol means more frequent Newton updates.
        .subspaceMin 0=no subspace minimization, 1=subspace minimization.

 EXAMPLE
 =======
   m = 120; n = 512; k = 20; % m rows, n cols, k nonzeros.
   p = randperm(n); x0 = zeros(n,1); x0(p(1:k)) = sign(randn(k,1));
   A  = randn(m,n); [Q,R] = qr(A',0);  A = Q';
   b  = A*x0 + 0.005 * randn(m,1);
   opts = spgSetParms('optTol',1e-4);
   [x,r,g,info] = spgl1(A, b, 0, 1e-3, [], opts); % Find BP sol'n.
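
 A corresponding call to the weighted solver itself, setting omega
 explicitly, might look like this (values are illustrative):

```matlab
opts = wspgSetParms('optTol',1e-4,'omega',0.3);  % omega weights the support
[x,r,g,info] = wspgl1(A, b, 0, 1e-3, [], opts);  % BPDN mode: tau = 0
```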

 AUTHORS
 =======
  Ewout van den Berg (ewout78@cs.ubc.ca)
  Michael P. Friedlander (mpf@cs.ubc.ca)
    Scientific Computing Laboratory (SCL)
    University of British Columbia, Canada.

 BUGS
 ====
 Please send bug reports or comments to
            Michael P. Friedlander (mpf@cs.ubc.ca)
            Ewout van den Berg (ewout78@cs.ubc.ca)

SOURCE CODE

0001 function [x,r,g,info] = wspgl1( A, b, tau, sigma, x, options )
0002 %WSPGL1  Solve basis pursuit, basis pursuit denoise, and LASSO
0003 %
0004 % [x, r, g, info] = wspgl1(A, b, tau, sigma, x0, options)
0005 %
0006 % ---------------------------------------------------------------------
0007 % Solve a MODIFIED basis pursuit denoise (BPDN) problem
0008 %
0009 % (BPDN)   minimize  ||x||_1  subj to  ||Ax-b||_2 <= sigma,
0010 %
0011 % or the l1-regularized least-squares problem
0012 %
0013 % (LASSO)  minimize  ||Ax-b||_2  subj to  ||x||_1 <= tau.
0014 %
0015 % The modification lies in implementing a weighting strategy within the
0016 % LASSO subproblems employed to solve the BPDN problem. A full description
0017 % of this algorithm can be found in the paper:
0018 %
0019 % Hassan Mansour, "Beyond $\ell_1$ minimization for sparse signal
0020 % recovery," IEEE Workshop on Statistical Signal Processing (SSP), Aug.
0021 % 2012
0022 % ---------------------------------------------------------------------
0023 %
0024 % REMARKS AND AUTHORSHIP
0025 % ======================
0026 %
0027 % 30 May 12: The WSPGL1 algorithm differs from SPGL1 by introducing the
0028 %           parameter 'omega' in the options structure. Omega specifies the
0029 %           weight that is applied to the support of the solution of each
0030 %           LASSO subproblem.
0031 %
0032 % This code was modified by Hassan Mansour (hassanm@cs.ubc.ca)
0033 % Copyright (C) 2012 Hassan Mansour
0034 % Computer Science and Mathematics Departments,
0035 % University of British Columbia, Canada.
0036 % All rights reserved. E-mail: <{hassanm}@cs.ubc.ca>.
0037 %
0038 % The following documentation and credits pertain to the original SPGL1 release.
0039 %
0040 % ---------------------------------------------------------------------
0041 %
0042 % INPUTS
0043 % ======
0044 % A        is an m-by-n matrix, explicit or an operator.
0045 %          If A is a function, then it must have the signature
0046 %
0047 %          y = A(x,mode)   if mode == 1 then y = A x  (y is m-by-1);
0048 %                          if mode == 2 then y = A'x  (y is n-by-1).
0049 %
0050 % b        is an m-vector.
0051 % tau      is a nonnegative scalar; see (LASSO).
0052 % sigma    if sigma is neither Inf nor empty ([]), then spgl1 will launch
0053 %          into root-finding mode to find the tau above that solves (BPDN).
0054 %          In this case, it is STRONGLY recommended that tau = 0.
0055 % x0       is an n-vector estimate of the solution (possibly all
0056 %          zeros). If x0 = [], then SPGL1 determines the length n via
0057 %          n = length( A'b ) and sets  x0 = zeros(n,1).
0058 % options  is a structure of options from spgSetParms. Any unset options
0059 %          are set to their default value; set options=[] to use all
0060 %          default values.
0061 %
0062 % OUTPUTS
0063 % =======
0064 % x        is a solution of the problem
0065 % r        is the residual, r = b - Ax
0066 % g        is the gradient, g = -A'r
0067 % info     is a structure with the following information:
0068 %          .tau     final value of tau (see sigma above)
0069 %          .rNorm   two-norm of the optimal residual
0070 %          .rGap    relative duality gap (an optimality measure)
0071 %          .gNorm   Lagrange multiplier of (LASSO)
0072 %          .stat    = 1 found a BPDN solution
0073 %                   = 2 found a BP sol'n; exit based on small gradient
0074 %                   = 3 found a BP sol'n; exit based on small residual
0075 %                   = 4 found a LASSO solution
0076 %                   = 5 error: too many iterations
0077 %                   = 6 error: linesearch failed
0078 %                   = 7 error: found suboptimal BP solution
0079 %                   = 8 error: too many matrix-vector products
0080 %          .time    total solution time (seconds)
0081 %          .nProdA  number of multiplications with A
0082 %          .nProdAt number of multiplications with A'
0083 %
0084 % OPTIONS
0085 % =======
0086 % Use the options structure to control various aspects of the algorithm:
0087 %
0088 % options.fid         File ID to direct log output
0089 %        .verbosity   0=quiet, 1=some output, 2=more output.
0090 %        .iterations  Max. number of iterations (default is 10*m).
0091 %        .bpTol       Tolerance for identifying a basis pursuit solution.
0092 %        .optTol      Optimality tolerance (default is 1e-4).
0093 %        .decTol      Larger decTol means more frequent Newton updates.
0094 %        .subspaceMin 0=no subspace minimization, 1=subspace minimization.
0095 %
0096 % EXAMPLE
0097 % =======
0098 %   m = 120; n = 512; k = 20; % m rows, n cols, k nonzeros.
0099 %   p = randperm(n); x0 = zeros(n,1); x0(p(1:k)) = sign(randn(k,1));
0100 %   A  = randn(m,n); [Q,R] = qr(A',0);  A = Q';
0101 %   b  = A*x0 + 0.005 * randn(m,1);
0102 %   opts = spgSetParms('optTol',1e-4);
0103 %   [x,r,g,info] = spgl1(A, b, 0, 1e-3, [], opts); % Find BP sol'n.
0104 %
0105 % AUTHORS
0106 % =======
0107 %  Ewout van den Berg (ewout78@cs.ubc.ca)
0108 %  Michael P. Friedlander (mpf@cs.ubc.ca)
0109 %    Scientific Computing Laboratory (SCL)
0110 %    University of British Columbia, Canada.
0111 %
0112 % BUGS
0113 % ====
0114 % Please send bug reports or comments to
0115 %            Michael P. Friedlander (mpf@cs.ubc.ca)
0116 %            Ewout van den Berg (ewout78@cs.ubc.ca)
0117 
0118 % 15 Apr 07: First version derived from spg.m.
0119 %            Michael P. Friedlander (mpf@cs.ubc.ca).
0120 %            Ewout van den Berg (ewout78@cs.ubc.ca).
0121 % 17 Apr 07: Added root-finding code.
0122 % 18 Apr 07: sigma was being compared to 1/2 r'r, rather than
0123 %            norm(r), as advertised.  Now immediately change sigma to
0124 %            (1/2)sigma^2, and changed log output accordingly.
0125 % 24 Apr 07: Added quadratic root-finding code as an option.
0126 % 24 Apr 07: Exit conditions need to guard against small ||r||
0127 %            (ie, a BP solution).  Added test1,test2,test3 below.
0128 % 15 May 07: Trigger to update tau is now based on relative difference
0129 %            in objective between consecutive iterations.
0130 % 15 Jul 07: Added code to allow a limited number of line-search
0131 %            errors.
0132 % 23 Feb 08: Fixed bug in one-norm projection using weights. Thanks
0133 %            to Xiangrui Meng for reporting this bug.
0134 % 26 May 08: The simple call spgl1(A,b) now solves (BPDN) with sigma=0.
0135     
0136 %   spgl1.m
0137 %   $Id: spgl1.m 1225 2009-01-30 20:36:31Z ewout78 $
0138 %
0139 %   ----------------------------------------------------------------------
0140 %   This file is part of SPGL1 (Spectral Projected-Gradient for L1).
0141 %
0142 %   Copyright (C) 2007 Ewout van den Berg and Michael P. Friedlander,
0143 %   Department of Computer Science, University of British Columbia, Canada.
0144 %   All rights reserved. E-mail: <{ewout78,mpf}@cs.ubc.ca>.
0145 %
0146 %   SPGL1 is free software; you can redistribute it and/or modify it
0147 %   under the terms of the GNU Lesser General Public License as
0148 %   published by the Free Software Foundation; either version 2.1 of the
0149 %   License, or (at your option) any later version.
0150 %
0151 %   SPGL1 is distributed in the hope that it will be useful, but WITHOUT
0152 %   ANY WARRANTY; without even the implied warranty of MERCHANTABILITY
0153 %   or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Lesser General
0154 %   Public License for more details.
0155 %
0156 %   You should have received a copy of the GNU Lesser General Public
0157 %   License along with SPGL1; if not, write to the Free Software
0158 %   Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301
0159 %   USA
0160 %   ----------------------------------------------------------------------
0161 REVISION = '$Revision: 1017 $';
0162 DATE     = '$Date: 2012-05-30 11:11:11 -0800 (Wed, 30 May 2012) $';
0163 REVISION = REVISION(11:end-1);
0164 DATE     = DATE(35:50);
0165 
0166 tic;              % Start your watches!
0167 m = length(b);
0168 
0169 %----------------------------------------------------------------------
0170 % Check arguments.
0171 %----------------------------------------------------------------------
0172 if ~exist('options','var'), options = []; end
0173 if ~exist('x','var'), x = []; end
0174 if ~exist('sigma','var'), sigma = []; end
0175 if ~exist('tau','var'), tau = []; end
0176 
0177 if nargin < 2 || isempty(b) || isempty(A)
0178    error('At least two arguments are required');
0179 elseif isempty(tau) && isempty(sigma)
0180    tau = 0;
0181    sigma = 0;
0182    singleTau = false;
0183 elseif isempty(sigma) % && ~isempty(tau)  <-- implied
0184    singleTau = true;
0185 else
0186    if isempty(tau)
0187       tau = 0;
0188    end
0189    singleTau = false;
0190 end
0191 
0192 %----------------------------------------------------------------------
0193 % Grab input options and set defaults where needed.
0194 %----------------------------------------------------------------------
0195 defaultopts = wspgSetParms(...
0196 'fid'        ,      1 , ... % File ID for output
0197 'verbosity'  ,      2 , ... % Verbosity level
0198 'iterations' ,   10*m , ... % Max number of iterations
0199 'nPrevVals'  ,      3 , ... % Number previous func values for linesearch
0200 'bpTol'      ,  1e-06 , ... % Tolerance for basis pursuit solution 
0201 'optTol'     ,  1e-04 , ... % Optimality tolerance
0202 'decTol'     ,  1e-04 , ... % Req'd rel. change in primal obj. for Newton
0203 'stepMin'    ,  1e-16 , ... % Minimum spectral step
0204 'stepMax'    ,  1e+05 , ... % Maximum spectral step
0205 'rootMethod' ,      2 , ... % Root finding method: 2=quad,1=linear (not used).
0206 'activeSetIt',    Inf , ... % Exit with EXIT_ACTIVE_SET if nnz same for # its.
0207 'subspaceMin',      0 , ... % Use subspace minimization
0208 'iscomplex'  ,    NaN , ... % Flag set to indicate complex problem
0209 'maxMatvec'  ,    Inf , ... % Maximum matrix-vector multiplies allowed
0210 'weights'    ,      1 , ... % Weights W in ||Wx||_1
0211 'omega'      ,    0.3 , ... % Subroutine weight omega that modifies the LASSO subproblems %% WSPGL1 _ Hassan Mansour
0212 'project'    , @NormL1_project , ...
0213 'primal_norm', @NormL1_primal  , ...
0214 'dual_norm'  , @NormL1_dual      ...
0215    );
0216 options = wspgSetParms(defaultopts, options);
0217 
0218 fid           = options.fid;
0219 logLevel      = options.verbosity;
0220 maxIts        = options.iterations;
0221 nPrevVals     = options.nPrevVals;
0222 bpTol         = options.bpTol;
0223 optTol        = options.optTol;
0224 decTol        = options.decTol;
0225 stepMin       = options.stepMin;
0226 stepMax       = options.stepMax;
0227 activeSetIt   = options.activeSetIt;
0228 subspaceMin   = options.subspaceMin;
0229 maxMatvec     = max(3,options.maxMatvec);
0230 weights       = options.weights;
0231 omega         = options.omega; % WSPGL1 _ Hassan Mansour
0232 
0233 maxLineErrors = 10;     % Maximum number of line-search failures.
0234 pivTol        = 1e-12;  % Threshold for significant Newton step.
0235 
0236 %----------------------------------------------------------------------
0237 % Initialize local variables.
0238 %----------------------------------------------------------------------
0239 iter          = 0;  itnTotLSQR = 0; % Total SPGL1 and LSQR iterations.
0240 nProdA        = 0;  nProdAt    = 0;
0241 lastFv        = -inf(nPrevVals,1);  % Last m function values.
0242 nLineTot      = 0;                  % Total no. of linesearch steps.
0243 printTau      = false;
0244 nNewton       = 0;
0245 bNorm         = norm(b,2);
0246 stat          = false;
0247 timeProject   = 0;
0248 timeMatProd   = 0;
0249 nnzIter       = 0;                  % No. of its with fixed pattern.
0250 nnzIdx        = [];                 % Active-set indicator.
0251 subspace      = false;              % Flag if did subspace min in current itn.
0252 stepG         = 1;                  % Step length for projected gradient.
0253 testUpdateTau = 0;                  % Previous step did not update tau
0254 
0255 % Determine initial x, vector length n, and see if problem is complex
0256 explicit = ~(isa(A,'function_handle'));
0257 if isempty(x)
0258    if isnumeric(A)
0259       n = size(A,2);
0260       realx = isreal(A) && isreal(b);
0261    else
0262       x = Aprod(b,2);
0263       n = length(x);
0264       realx = isreal(x) && isreal(b);
0265    end
0266    x = zeros(n,1);
0267 else
0268    n     = length(x);
0269    realx = isreal(x) && isreal(b);
0270 end
0271 if isnumeric(A), realx = realx && isreal(A); end;
0272 
0273 % Override options when options.iscomplex flag is set
0274 if (~isnan(options.iscomplex)), realx = (options.iscomplex == 0); end
0275 
0276 % Check if all weights (if any) are strictly positive. In previous
0277 % versions we also checked if the number of weights was equal to
0278 % n. In the case of multiple measurement vectors, this no longer
0279 % needs to apply, so the check was removed.
0280 if ~isempty(weights)
0281   if any(~isfinite(weights))
0282      error('Entries in options.weights must be finite');
0283   end
0284   if any(weights <= 0)
0285      error('Entries in options.weights must be strictly positive');
0286   end
0287 else
0288   weights = 1;
0289 end
0290 
0291 % Quick exit if sigma >= ||b||.  Set tau = 0 to short-circuit the loop.
0292 if bNorm <= sigma
0293    printf('W: sigma >= ||b||.  Exact solution is x = 0.\n');
0294    tau = 0;  singleTau = true;
0295 end 
0296   
0297 % Don't do subspace minimization if x is complex.
0298 if ~realx && subspaceMin
0299    printf('W: Subspace minimization disabled when variables are complex.\n');
0300    subspaceMin = false;
0301 end
0302 
0303 % Pre-allocate iteration info vectors
0304 xNorm1 = zeros(min(maxIts,10000),1);
0305 rNorm2 = zeros(min(maxIts,10000),1);
0306 lambda = zeros(min(maxIts,10000),1);
0307 
0308 %{WSPGL1 _ Hassan Mansour
0309 xNorm1_1 = [];
0310 rNorm2_1 = [];
0311 lambda_1 = [];
0312 %}
0313 
0314 % Exit conditions (constants).
0315 EXIT_ROOT_FOUND    = 1;
0316 EXIT_BPSOL1_FOUND  = 2;
0317 EXIT_BPSOL2_FOUND  = 3;
0318 EXIT_OPTIMAL       = 4;
0319 EXIT_ITERATIONS    = 5;
0320 EXIT_LINE_ERROR    = 6;
0321 EXIT_SUBOPTIMAL_BP = 7;
0322 EXIT_MATVEC_LIMIT  = 8;
0323 EXIT_ACTIVE_SET    = 9; % [sic]
0324 
0325 %----------------------------------------------------------------------
0326 % Log header.
0327 %----------------------------------------------------------------------
0328 printf('\n');
0329 printf(' %s\n',repmat('=',1,80));
0330 printf(' WSPGL1  v.%s (%s)\n', REVISION, DATE);
0331 printf(' %s\n',repmat('=',1,80));
0332 printf(' %-22s: %8i %4s'   ,'No. rows'          ,m       ,'');
0333 printf(' %-22s: %8i\n'     ,'No. columns'       ,n          );
0334 printf(' %-22s: %8.2e %4s' ,'Initial tau'       ,tau     ,'');
0335 printf(' %-22s: %8.2e\n'   ,'Two-norm of b'     ,bNorm      );
0336 printf(' %-22s: %8.2e %4s' ,'Optimality tol'    ,optTol  ,'');
0337 if singleTau
0338    printf(' %-22s: %8.2e\n'  ,'Target one-norm of x'  ,tau       );
0339 else
0340    printf(' %-22s: %8.2e\n','Target objective'  ,sigma      );
0341 end
0342 printf(' %-22s: %8.2e %4s' ,'Basis pursuit tol' ,bpTol   ,'');
0343 printf(' %-22s: %8i\n'     ,'Maximum iterations',maxIts     );
0344 printf('\n');
0345 if singleTau
0346    logB = ' %5i  %13.7e  %13.7e  %9.2e  %6.1f  %6i  %6i';
0347    logH = ' %5s  %13s  %13s  %9s  %6s  %6s  %6s\n';
0348    printf(logH,'Iter','Objective','Relative Gap','gNorm','stepG','nnzX','nnzG');
0349 else
0350    logB = ' %5i  %13.7e  %13.7e  %9.2e  %9.3e  %6.1f  %6i  %6i';
0351    logH = ' %5s  %13s  %13s  %9s  %9s  %6s  %6s  %6s  %13s\n';
0352    printf(logH,'Iter','Objective','Relative Gap','Rel Error',...
0353           'gNorm','stepG','nnzX','nnzG','tau');
0354 end    
0355     
0356 % Project the starting point and evaluate function and gradient.
0357 x         = project(x,tau);
0358 r         = b - Aprod(x,1);  % r = b - Ax
0359 g         =   - Aprod(r,2);  % g = -A'r
0360 f         = r'*r / 2; 
0361 
0362 % Required for nonmonotone strategy.
0363 lastFv(1) = f;
0364 fBest     = f;
0365 xBest     = x;
0366 fOld      = f;
0367 
0368 % Compute projected gradient direction and initial steplength.
0369 dx     = project(x - g, tau) - x;
0370 dxNorm = norm(dx,inf);
0371 if dxNorm < (1 / stepMax)
0372    gStep = stepMax;
0373 else
0374    gStep = min( stepMax, max(stepMin, 1/dxNorm) );
0375 end
0376 
0377 %{ WSPGL1 _ Hassan Mansour
0378     weights_old = weights;
0379 %}
0380 
0381 
0382 %----------------------------------------------------------------------
0383 % MAIN LOOP.
0384 %----------------------------------------------------------------------
0385 while 1
0386     
0387     %------------------------------------------------------------------
0388     % Test exit conditions.
0389     %------------------------------------------------------------------
0390 
0391     % Compute quantities needed for log and exit conditions.
0392     gNorm   = options.dual_norm(-g,weights);
0393     rNorm   = norm(r, 2);
0394     gap     = r'*(r-b) + tau*gNorm;
0395     rGap    = abs(gap) / max(1,f);
0396     aError1 = rNorm - sigma;
0397     aError2 = f - sigma^2 / 2;
0398     rError1 = abs(aError1) / max(1,rNorm);
0399     rError2 = abs(aError2) / max(1,f);
0400 
0401     % Count number of consecutive iterations with identical support.
0402     nnzOld = nnzIdx;
0403     [nnzX,nnzG,nnzIdx,nnzDiff] = activeVars(x,g,nnzIdx,options);
0404 
0405     if nnzDiff
0406        nnzIter = 0;
0407     else
0408        nnzIter = nnzIter + 1;
0409        if nnzIter >= activeSetIt, stat=EXIT_ACTIVE_SET; end
0410     end
0411     
0412     % Single tau: Check if we're optimal.
0413     % The 2nd condition is there to guard against large tau.
0414     if singleTau
0415        if rGap <= optTol || rNorm < optTol*bNorm
0416           stat  = EXIT_OPTIMAL;
0417        end
0418  
0419     % Multiple tau: Check if found root and/or if tau needs updating.
0420     else
0421        
0422        if rGap <= max(optTol, rError2) || rError1 <= optTol
0423           % The problem is nearly optimal for the current tau.
0424           % Check optimality of the current root.
0425           test1 = rNorm       <=   bpTol * bNorm;
0426           test2 = gNorm       <=   bpTol * rNorm;
0427           test3 = rError1     <=  optTol;
0428           test4 = rNorm       <=  sigma;
0429           
0430           if test4, stat=EXIT_SUBOPTIMAL_BP;end % Found suboptimal BP sol.
0431           if test3, stat=EXIT_ROOT_FOUND;   end % Found approx root.
0432           if test2, stat=EXIT_BPSOL2_FOUND; end % Gradient zero -> BP sol.
0433           if test1, stat=EXIT_BPSOL1_FOUND; end % Resid minim'zd -> BP sol.
0434        end
0435 
0436        testRelChange1 = (abs(f - fOld) <= decTol * f);
0437        testRelChange2 = (abs(f - fOld) <= 1e-1 * f * (abs(rNorm - sigma)));
0438        testUpdateTau  = ((testRelChange1 && rNorm >  2 * sigma) || ...
0439                          (testRelChange2 && rNorm <= 2 * sigma)) && ...
0440                          ~stat && ~testUpdateTau;
0441        
0442        if testUpdateTau
0443            
0444           %{ WSPGL1 _ Hassan Mansour
0445           if ~isempty(x)
0446 %               omega = 0.3;
0447               [Cx, idx] = sort(abs(x), 'descend');
0448               ratCx = sqrt(cumsum(Cx.^2))/norm(Cx);
0449               k = min([round(0.5*length(b)/log(n/length(b))), find(ratCx >= 0.9999,1),round(length(b)/2-1)]);
0450 %               k = round(length(b)/log(n/length(b)));
0451               Tsupp = idx(1:k);
0452               
0453               weights = ones(n,1);
0454               weights(Tsupp) = omega;
0455               weights = weights.*weights_old;
0456               
0457               %update tau and gnorm
0458               tau = options.primal_norm(x,weights);
0459               gNorm   = options.dual_norm(g,weights);
0460               
0461               %update history
0462           end
0463           %}
0464           
0465           % Update tau.
0466           tauOld   = tau;
0467           tau      = max(0,tau + (rNorm * aError1) / gNorm);
0468           nNewton  = nNewton + 1;
0469           printTau = abs(tauOld - tau) >= 1e-6 * tau; % For log only.
0470           if tau < tauOld
0471              % The one-norm ball has decreased.  Need to make sure that the
0472 %              next iterate is feasible, which we do by projecting it.
0473              x = project(x,tau);
0474           end
0475        end
0476     end
0477 
0478     % Too many its and not converged.
0479     if ~stat  &&  iter >= maxIts
0480         stat = EXIT_ITERATIONS;
0481     end
0482 
0483     %------------------------------------------------------------------
0484     % Print log, update history and act on exit conditions.
0485     %------------------------------------------------------------------
0486     if logLevel >= 2 || singleTau || printTau || iter == 0 || stat
0487        tauFlag = '              '; subFlag = '';
0488        if printTau, tauFlag = sprintf(' %13.7e',tau);   end
0489        if subspace, subFlag = sprintf(' S %2i',itnLSQR); end
0490        if singleTau
0491           printf(logB,iter,rNorm,rGap,gNorm,log10(stepG),nnzX,nnzG);
0492           if subspace
0493              printf('  %s',subFlag);
0494           end
0495        else
0496           printf(logB,iter,rNorm,rGap,rError1,gNorm,log10(stepG),nnzX,nnzG);
0497           if printTau || subspace
0498              printf(' %s',[tauFlag subFlag]);
0499           end
0500        end
0501        printf('\n');
0502     end
0503     printTau = false;
0504     subspace = false;
0505     
0506     % Update history info
0507     xNorm1(iter+1) = options.primal_norm(x,weights);
0508     rNorm2(iter+1) = rNorm;
0509     lambda(iter+1) = gNorm;
0510     
0511     %{WSPGL1 _ Hassan Mansour
0512     if testUpdateTau
0513        xNorm1_1 = [xNorm1_1; xNorm1(iter+1)];
0514        rNorm2_1 = [rNorm2_1; rNorm];
0515        lambda_1 = [lambda_1; gNorm];
0516     end
0517     %}
0518     
0519     if stat, break; end % Act on exit conditions.
0520         
0521     %==================================================================
0522     % Iterations begin here.
0523     %==================================================================
0524     iter = iter + 1;
0525     xOld = x;  fOld = f;  gOld = g;  rOld = r;
0526 
0527     try
0528        %---------------------------------------------------------------
0529        % Projected gradient step and linesearch.
0530        %---------------------------------------------------------------
0531        [f,x,r,nLine,stepG,lnErr] = ...
0532            spgLineCurvy(x,gStep*g,max(lastFv),@Aprod,b,@project,tau);
0533        nLineTot = nLineTot + nLine;
0534        if lnErr
0535           %  Projected backtrack failed. Retry with feasible dir'n linesearch.
0536           x    = xOld;
0537           dx   = project(x - gStep*g, tau) - x;
0538           gtd  = g'*dx;
0539           [f,x,r,nLine,lnErr] = spgLine(f,x,dx,gtd,max(lastFv),@Aprod,b);
0540           nLineTot = nLineTot + nLine;
0541        end
0542        if lnErr
0543        %  Failed again.  Revert to previous iterates and damp max BB step.
0544           if maxLineErrors <= 0
0545              stat = EXIT_LINE_ERROR;
0546           else
0547              stepMax = stepMax / 10;
0548              printf(['W: Linesearch failed with error %i. '...
0549                      'Damping max BB scaling to %6.1e.\n'],lnErr,stepMax);
0550              maxLineErrors = maxLineErrors - 1;
0551           end
0552        end
0553 
0554        %---------------------------------------------------------------
0555        % Subspace minimization (only if active-set change is small).
0556        %---------------------------------------------------------------
0557        doSubspaceMin = false;
0558        if subspaceMin
0559           g = - Aprod(r,2);
0560           [nnzX,nnzG,nnzIdx,nnzDiff] = activeVars(x,g,nnzOld,options);
0561           if ~nnzDiff
0562               if nnzX == nnzG, itnMaxLSQR = 20;
0563               else             itnMaxLSQR = 5;
0564               end
0565               nnzIdx = abs(x) >= optTol; 
0566               doSubspaceMin = true;
0567           end
0568        end
0569 
0570        if doSubspaceMin
0571      
0572           % LSQR parameters
0573           damp       = 1e-5;
0574           aTol       = 1e-1;
0575           bTol       = 1e-1;
0576           conLim     = 1e12;
0577           showLSQR   = 0;
0578        
0579           ebar   = sign(x(nnzIdx));
0580           nebar  = length(ebar);
0581           Sprod  = @(y,mode)LSQRprod(@Aprod,nnzIdx,ebar,n,y,mode);
0582        
0583           [dxbar, istop, itnLSQR] = ...
0584              lsqr(m,nebar,Sprod,r,damp,aTol,bTol,conLim,itnMaxLSQR,showLSQR);
0585               
0586           itnTotLSQR = itnTotLSQR + itnLSQR;
0587        
0588           if istop ~= 4  % LSQR iterations successful. Take the subspace step.
0589              % Push dx back into full space:  dx = Z dx.
0590              dx = zeros(n,1);
0591              dx(nnzIdx) = dxbar - (1/nebar)*(ebar'*dxbar)*ebar;
0592 
0593              % Find largest step to a change in sign.
0594              block1 = nnzIdx  &  x < 0  &  dx > +pivTol;
0595              block2 = nnzIdx  &  x > 0  &  dx < -pivTol;
0596              alpha1 = Inf; alpha2 = Inf;
0597              if any(block1), alpha1 = min(-x(block1) ./ dx(block1)); end
0598              if any(block2), alpha2 = min(-x(block2) ./ dx(block2)); end
0599              alpha = min([1  alpha1  alpha2]);
0600              ensure(alpha >= 0);
0601              ensure(ebar'*dx(nnzIdx) <= optTol);          
0602           
0603              % Update variables.
0604              x    = x + alpha*dx;
0605              r    = b - Aprod(x,1);
0606              f    = r'*r / 2;
0607              subspace = true;
0608           end
0609        end
0610        
0611        ensure(options.primal_norm(x,weights) <= tau+optTol);
0612 
0613        %---------------------------------------------------------------
0614        % Update gradient and compute new Barzilai-Borwein scaling.
0615        %---------------------------------------------------------------
0616        g    = - Aprod(r,2);
0617        s    = x - xOld;
0618        y    = g - gOld;
0619        sts  = s'*s;
0620        sty  = s'*y;
0621        if   sty <= 0,  gStep = stepMax;
0622        else            gStep = min( stepMax, max(stepMin, sts/sty) );
0623        end
0624        
0625     catch % Detect matrix-vector multiply limit error
0626        err = lasterror;
0627        if strcmp(err.identifier,'SPGL1:MaximumMatvec')
0628          stat = EXIT_MATVEC_LIMIT;
0629          iter = iter - 1;
0630          x = xOld;  f = fOld;  g = gOld;  r = rOld;
0631          break;
0632        else
0633          rethrow(err);
0634        end
0635     end
0636 
0637     %------------------------------------------------------------------
0638     % Update function history.
0639     %------------------------------------------------------------------
0640     if singleTau || f > sigma^2 / 2 % Don't update if superoptimal.
0641        lastFv(mod(iter,nPrevVals)+1) = f;
0642        if fBest > f
0643           fBest = f;
0644           xBest = x;
0645        end
0646     end
0647 
0648 end % while 1
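The Barzilai-Borwein scaling computed at the bottom of the loop above can be sketched in Python. This is an illustrative translation of the MATLAB logic, not part of the solver; the clamp bounds are the caller-supplied stepMin/stepMax:

```python
# Illustrative sketch of the BB1 spectral step length with safeguards.
import numpy as np

def bb_step(x, x_old, g, g_old, step_min=1e-16, step_max=1e5):
    """Return sts/sty clamped to [step_min, step_max]; fall back to
    step_max when the curvature estimate s'y is non-positive."""
    s = x - x_old
    y = g - g_old
    sts = float(s @ s)
    sty = float(s @ y)
    if sty <= 0:
        return step_max
    return min(step_max, max(step_min, sts / sty))
```

For a quadratic f(x) = 0.5||x||^2 the gradient is g = x, so s = y and the BB step recovers the unit Newton step.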
0649 %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
0650 
0651 % Restore best solution (only if solving single problem).
0652 if singleTau && f > fBest
0653    rNorm = sqrt(2*fBest);
0654    printf('\n Restoring best iterate to objective %13.7e\n',rNorm);
0655    x = xBest;
0656    r = b - Aprod(x,1);
0657    g =   - Aprod(r,2);
0658    gNorm = options.dual_norm(g,weights);
0659    rNorm = norm(r,  2);
0660 end
0661 
0662 % Final cleanup before exit.
0663 info.tau         = tau;
0664 info.rNorm       = rNorm;
0665 info.rGap        = rGap;
0666 info.gNorm       = gNorm;
0668 info.stat        = stat;
0669 info.iter        = iter;
0670 info.nProdA      = nProdA;
0671 info.nProdAt     = nProdAt;
0672 info.nNewton     = nNewton;
0673 info.timeProject = timeProject;
0674 info.timeMatProd = timeMatProd;
0675 info.itnLSQR     = itnTotLSQR;
0676 info.options     = options;
0677 info.timeTotal   = toc;
0678 
0679 info.xNorm1      = xNorm1(1:iter);
0680 info.rNorm2      = rNorm2(1:iter);
0681 info.lambda      = lambda(1:iter);
0682 
0683 % WSPGL1 additions (Hassan Mansour)
0684 info.xNorm1_1      = xNorm1_1;
0685 info.rNorm2_1      = rNorm2_1;
0686 info.lambda_1      = lambda_1;
0688 
0689 % Print final output.
0690 switch (stat)
0691    case EXIT_OPTIMAL
0692       printf('\n EXIT -- Optimal solution found\n')
0693    case EXIT_ITERATIONS
0694       printf('\n ERROR EXIT -- Too many iterations\n');
0695    case EXIT_ROOT_FOUND
0696       printf('\n EXIT -- Found a root\n');
0697    case {EXIT_BPSOL1_FOUND, EXIT_BPSOL2_FOUND}
0698       printf('\n EXIT -- Found a BP solution\n');
0699    case EXIT_LINE_ERROR
0700       printf('\n ERROR EXIT -- Linesearch error (%i)\n',lnErr);
0701    case EXIT_SUBOPTIMAL_BP
0702       printf('\n EXIT -- Found a suboptimal BP solution\n');
0703    case EXIT_MATVEC_LIMIT
0704       printf('\n EXIT -- Maximum matrix-vector operations reached\n');
0705    case EXIT_ACTIVE_SET
0706       printf('\n EXIT -- Found a possible active set\n');
0707    otherwise
0708       error('Unknown termination condition\n');
0709 end
0710 printf('\n');
0711 printf(' %-20s:  %6i %6s %-20s:  %6.1f\n',...
0712    'Products with A',nProdA,'','Total time   (secs)',info.timeTotal);
0713 printf(' %-20s:  %6i %6s %-20s:  %6.1f\n',...
0714    'Products with A''',nProdAt,'','Project time (secs)',timeProject);
0715 printf(' %-20s:  %6i %6s %-20s:  %6.1f\n',...
0716    'Newton iterations',nNewton,'','Mat-vec time (secs)',timeMatProd);
0717 printf(' %-20s:  %6i %6s %-20s:  %6i\n', ...
0718    'Line search its',nLineTot,'','Subspace iterations',itnTotLSQR);
0719 printf('\n');
0720 
0721 
0722 %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
0723 % NESTED FUNCTIONS.  These share some vars with workspace above.
0724 %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
0725     
0726 function z = Aprod(x,mode)
0727    if (nProdA + nProdAt >= maxMatvec)
0728      error('SPGL1:MaximumMatvec','');
0729    end
0730      
0731    tStart = toc;
0732    if mode == 1
0733       nProdA = nProdA + 1;
0734       if   explicit, z = A*x;
0735       else           z = A(x,1);
0736       end
0737    elseif mode == 2
0738       nProdAt = nProdAt + 1;
0739       if   explicit, z = A'*x;
0740       else           z = A(x,2);
0741       end
0742    else
0743       error('Wrong mode!');
0744    end
0745    timeMatProd = timeMatProd + (toc - tStart);
0746 end % function Aprod
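Aprod above implements the two-mode operator convention from the header (mode 1 applies A, mode 2 applies A'), counts every product, and aborts when the matvec budget is exhausted. A Python sketch of the same protocol; `CountingOperator` is an illustrative name:

```python
# Illustrative sketch of a matvec-counting operator wrapper.
import numpy as np

class CountingOperator:
    def __init__(self, A, max_matvec=1000):
        self.A = np.asarray(A)
        self.n_prod_a = 0     # products with A
        self.n_prod_at = 0    # products with A'
        self.max_matvec = max_matvec

    def __call__(self, x, mode):
        if self.n_prod_a + self.n_prod_at >= self.max_matvec:
            raise RuntimeError('SPGL1:MaximumMatvec')
        if mode == 1:
            self.n_prod_a += 1
            return self.A @ x
        elif mode == 2:
            self.n_prod_at += 1
            return self.A.conj().T @ x
        raise ValueError('Wrong mode!')
```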
0747 
0748 %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
0749 
0750 function printf(varargin)
0751   if logLevel > 0
0752      fprintf(fid,varargin{:});
0753   end
0754 end % function printf
0755 
0756 %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
0757 
0758 function x = project(x, tau)
0759    tStart      = toc;
0760 
0761    x = options.project(x,weights,tau);
0762    
0763    timeProject = timeProject + (toc - tStart);
0764 end % function project
0765 
0766 %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
0767 % End of nested functions.
0768 %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
0769 
0770 end % function spg
0771 
0772 %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
0773 % PRIVATE FUNCTIONS.
0774 %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
0775 
0776 function [nnzX,nnzG,nnzIdx,nnzDiff] = activeVars(x,g,nnzIdx,options)
0777 % Find the current active set.
0778 % nnzX    is the number of nonzero x.
0779 % nnzG    is the number of indices flagged active in nnzIdx.
0780 % nnzIdx  is a vector of primal/dual indicators.
0781 % nnzDiff is the no. of elements that changed in the support.
0782   xTol    = min(.1,10*options.optTol);
0783   gTol    = min(.1,10*options.optTol);
0784   gNorm   = options.dual_norm(g,options.weights);
0785   nnzOld  = nnzIdx;
0786 
0787   % Reduced costs for positive & negative parts of x.
0788   z1 = gNorm + g;
0789   z2 = gNorm - g;
0790 
0791   % Primal/dual based indicators.
0792   xPos    = x >  xTol  &  z1 < gTol; %g < gTol;%
0793   xNeg    = x < -xTol  &  z2 < gTol; %g > gTol;%
0794   nnzIdx  = xPos | xNeg;
0795 
0796   % Count is based on simple primal indicator.
0797   nnzX    = sum(abs(x) >= xTol);
0798   nnzG    = sum(nnzIdx);
0799   
0800   if isempty(nnzOld)
0801      nnzDiff = inf;
0802   else
0803      nnzDiff = sum(nnzIdx ~= nnzOld);
0804   end
0805   
0806 end % function activeVars
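The primal/dual test above flags index i active when |x_i| clears the primal tolerance AND its reduced cost (gNorm + g_i for positive parts, gNorm - g_i for negative parts) falls below the dual tolerance. A Python sketch of the unweighted case, using the infinity norm as the dual norm (the solver itself uses options.dual_norm with weights):

```python
# Illustrative sketch of the active-set detection in activeVars.
import numpy as np

def active_vars(x, g, nnz_idx_old, opt_tol=1e-4):
    x_tol = min(0.1, 10 * opt_tol)
    g_tol = min(0.1, 10 * opt_tol)
    g_norm = np.linalg.norm(g, np.inf)   # dual norm of the gradient
    z1 = g_norm + g                      # reduced cost, positive part
    z2 = g_norm - g                      # reduced cost, negative part
    x_pos = (x >  x_tol) & (z1 < g_tol)
    x_neg = (x < -x_tol) & (z2 < g_tol)
    nnz_idx = x_pos | x_neg
    nnz_x = int(np.sum(np.abs(x) >= x_tol))       # simple primal count
    nnz_g = int(np.sum(nnz_idx))
    nnz_diff = np.inf if nnz_idx_old is None else int(np.sum(nnz_idx != nnz_idx_old))
    return nnz_x, nnz_g, nnz_idx, nnz_diff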
0807 
0808 %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
0809 
0810 function z = LSQRprod(Aprod,nnzIdx,ebar,n,dx,mode)
0811 % Matrix multiplication for subspace minimization.
0812 % Only called by LSQR.
0813   nbar = length(ebar);
0814    if mode == 1
0815       y = zeros(n,1);
0816       y(nnzIdx) = dx - (1/nbar)*(ebar'*dx)*ebar; % y(nnzIdx) = Z*dx
0817       z = Aprod(y,1);                            % z = S Z dx
0818    else
0819       y = Aprod(dx,2);
0820       z = y(nnzIdx) - (1/nbar)*(ebar'*y(nnzIdx))*ebar;
0821    end
0822 end
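LSQRprod restricts A to the active set and composes it with Z = I - (1/nbar) ebar ebar', where ebar is a vector of +/-1 signs (so ebar'ebar = nbar). Z is the orthogonal projection onto the null space of ebar', which keeps the subspace step from changing the one-norm to first order. A Python sketch of just the projection:

```python
# Illustrative sketch of the null-space projection used in LSQRprod:
# apply Z = I - (1/nbar) * ebar * ebar' to a vector dx.
import numpy as np

def apply_Z(dx, ebar):
    nbar = len(ebar)
    return dx - (ebar @ dx / nbar) * ebar
```

Because ebar'ebar = nbar, Z annihilates ebar and is idempotent, as a projection must be.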
0823 
0824 %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
0825 
0826 function [fNew,xNew,rNew,iter,err] = spgLine(f,x,d,gtd,fMax,Aprod,b)
0827 % Nonmonotone linesearch.
0828 
0829 EXIT_CONVERGED  = 0;
0830 EXIT_ITERATIONS = 1;
0831 maxIts = 10;
0832 step   = 1;
0833 iter   = 0;
0834 gamma  = 1e-4;
0835 gtd    = -abs(gtd); % 03 Aug 07: If gtd is complex,
0836                     % then should be looking at -abs(gtd).
0837 while 1
0838 
0839     % Evaluate trial point and function value.
0840     xNew = x + step*d;
0841     rNew = b - Aprod(xNew,1);
0842     fNew = rNew'*rNew / 2;
0843 
0844     % Check exit conditions.
0845     if fNew < fMax + gamma*step*gtd  % Sufficient descent condition.
0846        err = EXIT_CONVERGED;
0847        break
0848     elseif  iter >= maxIts           % Too many linesearch iterations.
0849        err = EXIT_ITERATIONS;
0850        break
0851     end
0852     
0853     % New linesearch iteration.
0854     iter = iter + 1;
0855     
0856     % Safeguarded quadratic interpolation.
0857     if step <= 0.1
0858        step  = step / 2;
0859     else
0860        tmp = (-gtd*step^2) / (2*(fNew-f-step*gtd));
0861        if tmp < 0.1 || tmp > 0.9*step || isnan(tmp)
0862           tmp = step / 2;
0863        end
0864        step = tmp;
0865     end
0866     
0867 end % while 1
0868 
0869 end % function spgLine
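spgLine accepts a step when fNew < fMax + gamma*step*gtd, an Armijo condition measured against the maximum of recent objective values (which is what makes the search nonmonotone), and otherwise shrinks the step by safeguarded quadratic interpolation. A Python sketch of the same loop, for illustration; the residual callback stands in for Aprod:

```python
# Illustrative sketch of the nonmonotone Armijo backtracking in spgLine.
import numpy as np

def spg_line(f, x, d, gtd, f_max, residual, gamma=1e-4, max_its=10):
    """residual(x) returns b - A@x; f is the current value r'r/2."""
    gtd = -abs(gtd)          # guard against complex/ascent gtd
    step, it = 1.0, 0
    while True:
        x_new = x + step * d
        r_new = residual(x_new)
        f_new = float(r_new @ r_new) / 2
        if f_new < f_max + gamma * step * gtd:
            return f_new, x_new, r_new, it, 0   # converged
        if it >= max_its:
            return f_new, x_new, r_new, it, 1   # too many iterations
        it += 1
        if step <= 0.1:
            step /= 2
        else:
            # Safeguarded quadratic interpolation, as in the MATLAB code.
            denom = 2 * (f_new - f - step * gtd)
            tmp = (-gtd * step**2) / denom if denom != 0 else np.nan
            if np.isnan(tmp) or tmp < 0.1 or tmp > 0.9 * step:
                tmp = step / 2
            step = tmp
```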
0870 
0871 %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
0872 
0873 function [fNew,xNew,rNew,iter,step,err] = ...
0874     spgLineCurvy(x,g,fMax,Aprod,b,project,tau)
0875 % Projected backtracking linesearch.
0876 % On entry,
0877 % g  is the (possibly scaled) steepest descent direction.
0878 
0879 EXIT_CONVERGED  = 0;
0880 EXIT_ITERATIONS = 1;
0881 EXIT_NODESCENT  = 2;
0882 gamma  = 1e-4;
0883 maxIts = 10;
0884 step   =  1;
0885 sNorm  =  0;
0886 scale  =  1;      % Safeguard scaling.  (See below.)
0887 nSafe  =  0;      % No. of safeguarding steps.
0888 iter   =  0;
0889 debug  =  false;  % Set to true to enable log.
0890 n      =  length(x);
0891 
0892 if debug
0893    fprintf(' %5s  %13s  %13s  %13s  %8s\n',...
0894            'LSits','fNew','step','gts','scale');  
0895 end
0896    
0897 while 1
0898 
0899     % Evaluate trial point and function value.
0900     xNew     = project(x - step*scale*g, tau);
0901     rNew     = b - Aprod(xNew,1);
0902     fNew     = rNew'*rNew / 2;
0903     s        = xNew - x;
0904     gts      = scale * g' * s;
0905     if gts >= 0 % Should we check real and complex parts individually?
0906        err = EXIT_NODESCENT;
0907        break
0908     end
0909 
0910     if debug
0911        fprintf(' LS %2i  %13.7e  %13.7e  %13.6e  %8.1e\n',...
0912                iter,fNew,step,gts,scale);
0913     end
0914     
0915     % 03 Aug 07: If gts is complex, then should be looking at -abs(gts).
0916     if fNew < fMax - gamma*step*abs(gts)  % Sufficient descent condition.
0917        err = EXIT_CONVERGED;
0918        break
0919     elseif iter >= maxIts                 % Too many linesearch iterations.
0920        err = EXIT_ITERATIONS;
0921        break
0922     end
0923     
0924     % New linesearch iteration.
0925     iter = iter + 1;
0926     step = step / 2;
0927 
0928     % Safeguard: If stepMax is huge, then even damped search
0929     % directions can give exactly the same point after projection.  If
0930     % we observe this in adjacent iterations, we drastically damp the
0931     % next search direction.
0932     % 31 May 07: Damp consecutive safeguarding steps.
0933     sNormOld  = sNorm;
0934     sNorm     = norm(s) / sqrt(n);
0935     %   if sNorm >= sNormOld
0936     if abs(sNorm - sNormOld) <= 1e-6 * sNorm
0937        gNorm = norm(g) / sqrt(n);
0938        scale = sNorm / gNorm / (2^nSafe);
0939        nSafe = nSafe + 1;
0940     end
0941     
0942 end % while 1
0943 
0944 end % function spgLineCurvy
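spgLineCurvy differs from spgLine in that each trial point x - step*scale*g is projected back onto the feasible set before evaluation, so the search moves along the projected (curvilinear) arc rather than a straight line. A Python sketch of the core loop, omitting the safeguard damping; the real solver projects onto a weighted one-norm ball, while here a simple nonnegativity projection stands in to keep the sketch self-contained:

```python
# Illustrative sketch of the projected backtracking in spgLineCurvy.
import numpy as np

def spg_line_curvy(x, g, f_max, residual, project, gamma=1e-4, max_its=10):
    step, it = 1.0, 0
    while True:
        x_new = project(x - step * g)
        r_new = residual(x_new)
        f_new = float(r_new @ r_new) / 2
        gts = float(g @ (x_new - x))   # directional derivative along the arc
        if gts >= 0:
            return f_new, x_new, it, 2          # not a descent direction
        if f_new < f_max - gamma * step * abs(gts):
            return f_new, x_new, it, 0          # sufficient descent
        if it >= max_its:
            return f_new, x_new, it, 1          # too many iterations
        it += 1
        step /= 2
```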

Generated on Mon 10-Jun-2013 23:03:23 by m2html © 2005