Classic MapReduce (MapReduce 1) - Task execution

  • First, the tasktracker localizes the job JAR by copying it from the shared filesystem to the tasktracker's local filesystem. It also copies any files that the application needs from the distributed cache to the local disk (a submission-side sketch of how these files get registered follows the listing below).
  // In TaskTracker.java
  /**
   * Localize the job on this tasktracker. Specifically
   * <ul>
   * <li>Cleanup and create job directories on all disks</li>
   * <li>Download the credentials file</li>
   * <li>Download the job config file job.xml from the FS</li>
   * <li>Invokes the {@link TaskController} to do the rest of the job 
   * initialization</li>
   * </ul>
   *
   * @param t task whose job has to be localized on this TT
   * @param rjob the {@link RunningJob}
   * @param ttAddr the tasktracker's RPC address
   * @return the path to the job configuration to be used for all the tasks
   *         of this job as a starting point.
   * @throws IOException
   */
  Path initializeJob(final Task t, final RunningJob rjob, 
      final InetSocketAddress ttAddr)
  throws IOException, InterruptedException {
    final JobID jobId = t.getJobID();

    final Path jobFile = new Path(t.getJobFile());
    final String userName = t.getUser();
    final Configuration conf = getJobConf();

    // save local copy of JobToken file
    final String localJobTokenFile = localizeJobTokenFile(t.getUser(), jobId);
    synchronized (rjob) {
      rjob.ugi = UserGroupInformation.createRemoteUser(t.getUser());

      Credentials ts = TokenCache.loadTokens(localJobTokenFile, conf);
      Token<JobTokenIdentifier> jt = TokenCache.getJobToken(ts);
      if (jt != null) { //could be null in the case of some unit tests
        getJobTokenSecretManager().addTokenForJob(jobId.toString(), jt);
      }
      for (Token<? extends TokenIdentifier> token : ts.getAllTokens()) {
        rjob.ugi.addToken(token);
      }
    }

    FileSystem userFs = getFS(jobFile, jobId, conf);

    // Download the job.xml for this job from the system FS
    final Path localJobFile =
      localizeJobConfFile(new Path(t.getJobFile()), userName, userFs, jobId);

    /**
      * Now initialize the job via task-controller to do the rest of the
      * job-init. Do this within a doAs since the public distributed cache 
      * is also set up here.
      * To support potential authenticated HDFS accesses, we need the tokens
      */
    rjob.ugi.doAs(new PrivilegedExceptionAction<Object>() {
      public Object run() throws IOException, InterruptedException {
        try {
          final JobConf localJobConf = new JobConf(localJobFile);
          // Setup the public distributed cache
          TaskDistributedCacheManager taskDistributedCacheManager =
            getTrackerDistributedCacheManager()
           .newTaskDistributedCacheManager(jobId, localJobConf);
          rjob.distCacheMgr = taskDistributedCacheManager;
          taskDistributedCacheManager.setupCache(localJobConf,
            TaskTracker.getPublicDistributedCacheDir(),
            TaskTracker.getPrivateDistributedCacheDir(userName));

          // Set some config values
          localJobConf.set(JobConf.MAPRED_LOCAL_DIR_PROPERTY,
              getJobConf().get(JobConf.MAPRED_LOCAL_DIR_PROPERTY));
          if (conf.get("slave.host.name") != null) {
            localJobConf.set("slave.host.name", conf.get("slave.host.name"));
          }
          resetNumTasksPerJvm(localJobConf);
          localJobConf.setUser(t.getUser());

          // write back the config (this config will have the updates that the
          // distributed cache manager makes as well)
          JobLocalizer.writeLocalJobFile(localJobFile, localJobConf);
          taskController.initializeJob(t.getUser(), jobId.toString(), 
              new Path(localJobTokenFile), localJobFile, TaskTracker.this,
              ttAddr);
        } catch (IOException e) {
          LOG.warn("Exception while localization " + 
              StringUtils.stringifyException(e));
          throw e;
        } catch (InterruptedException ie) {
          LOG.warn("Exception while localization " + 
              StringUtils.stringifyException(ie));
          throw ie;
        }
        return null;
      }
    });
    //search for the conf that the initializeJob created
    //need to look up certain configs from this conf, like
    //the distributed cache, profiling, etc. ones
    Path initializedConf = lDirAlloc.getLocalPathToRead(getLocalJobConfFile(
           userName, jobId.toString()), getJobConf());
    return initializedConf;
  }
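The files that this first step pulls from the distributed cache are there because the submitting application registered them. A minimal submission-side sketch using the MR1 DistributedCache API (the HDFS paths here are hypothetical):

  // Submission-side sketch (hypothetical paths): registering files so that
  // initializeJob() above has something to localize for the tasks.
  import java.net.URI;
  import org.apache.hadoop.filecache.DistributedCache;
  import org.apache.hadoop.mapred.JobConf;

  public class CacheSetup {
    public static void configure(JobConf conf) throws Exception {
      // a read-only file each task can open from its working directory
      DistributedCache.addCacheFile(new URI("/apps/lookup/terms.txt"), conf);
      // an archive that the tasktracker unpacks onto local disk
      DistributedCache.addCacheArchive(new URI("/apps/lookup/dict.zip"), conf);
    }
  }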
  • Second, it creates a local working directory for the task and un-jars the contents of the job JAR into it.
  // In DefaultTaskController.java
  /**
   * This routine initializes the local file system for running a job.
   * Details:
   * <ul>
   * <li>Copies the credentials file from the TaskTracker's private space to
   * the job's private space </li>
   * <li>Creates the job work directory and set 
   * {@link TaskTracker#JOB_LOCAL_DIR} in the configuration</li>
   * <li>Downloads the job.jar, unjars it, and updates the configuration to 
   * reflect the localized path of the job.jar</li>
   * <li>Creates a base JobConf in the job's private space</li>
   * <li>Sets up the distributed cache</li>
   * <li>Sets up the user logs directory for the job</li>
   * </ul>
   * This method must be invoked in the access control context of the job owner 
   * user. This is because the distributed cache is also setup here and the 
   * access to the hdfs files requires authentication tokens in case where 
   * security is enabled.
   * @param user the user in question (the job owner)
   * @param jobid the ID of the job in question
   * @param credentials the path to the credentials file that the TaskTracker
   * downloaded
   * @param jobConf the path to the job configuration file that the TaskTracker
   * downloaded
   * @param taskTracker the connection to the task tracker
   * @throws IOException
   * @throws InterruptedException
   */
  @Override
  public void initializeJob(String user, String jobid, 
                            Path credentials, Path jobConf, 
                            TaskUmbilicalProtocol taskTracker,
                            InetSocketAddress ttAddr
                            ) throws IOException, InterruptedException {
    final LocalDirAllocator lDirAlloc = allocator;
    FileSystem localFs = FileSystem.getLocal(getConf());
    JobLocalizer localizer = new JobLocalizer((JobConf)getConf(), user, jobid);
    localizer.createLocalDirs();
    localizer.createUserDirs();
    localizer.createJobDirs();

    JobConf jConf = new JobConf(jobConf);
    localizer.createWorkDir(jConf);
    //copy the credential file
    Path localJobTokenFile = lDirAlloc.getLocalPathForWrite(
        TaskTracker.getLocalJobTokenFile(user, jobid), getConf());
    FileUtil.copy(
        localFs, credentials, localFs, localJobTokenFile, false, getConf());


    //setup the user logs dir
    localizer.initializeJobLogDir();

    // Download the job.jar for this job from the system FS
    // setup the distributed cache
    // write job acls
    // write localized config
    localizer.localizeJobFiles(JobID.forName(jobid), jConf, localJobTokenFile, 
                               taskTracker);
  }
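After initializeJob() has run, the job's files sit under each mapred.local.dir in a layout roughly like the sketch below (based on the MR1 defaults; exact names can vary between versions):

  ${mapred.local.dir}/taskTracker/${user}/
      distcache/              <-- private distributed cache
      jobcache/${jobid}/
          work/               <-- job-level scratch space
          jars/job.jar        <-- localized and un-jarred job JAR
          job.xml             <-- localized job configuration
          jobToken            <-- credentials file copied above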


  // In JobLocalizer.java

  public void localizeJobFiles(JobID jobid, JobConf jConf,
      Path localJobTokenFile, TaskUmbilicalProtocol taskTracker)
      throws IOException, InterruptedException {
    localizeJobFiles(jobid, jConf,
        lDirAlloc.getLocalPathForWrite(JOBCONF, ttConf), localJobTokenFile,
        taskTracker);
  }

  public void localizeJobFiles(final JobID jobid, JobConf jConf,
      Path localJobFile, Path localJobTokenFile,
      final TaskUmbilicalProtocol taskTracker) 
  throws IOException, InterruptedException {
    // Download the job.jar for this job from the system FS
    localizeJobJarFile(jConf);

    jConf.set(JOB_LOCAL_CTXT, ttConf.get(JOB_LOCAL_CTXT));

    //update the config some more
    jConf.set(TokenCache.JOB_TOKENS_FILENAME, localJobTokenFile.toString());
    jConf.set(JobConf.MAPRED_LOCAL_DIR_PROPERTY, 
        ttConf.get(JobConf.MAPRED_LOCAL_DIR_PROPERTY));
    TaskTracker.resetNumTasksPerJvm(jConf);

    //setup the distributed cache
    final long[] sizes = downloadPrivateCache(jConf);
    if (sizes != null) {
      //the following doAs is required because the DefaultTaskController
      //calls the localizeJobFiles method in the context of the TaskTracker
      //process. The JVM authorization check would fail without this
      //doAs. In the LinuxTC case, this doesn't harm.
      UserGroupInformation ugi = 
        UserGroupInformation.createRemoteUser(jobid.toString());
      ugi.doAs(new PrivilegedExceptionAction<Object>() { 
        public Object run() throws IOException {
          taskTracker.updatePrivateDistributedCacheSizes(jobid, sizes);
          return null;
        }
      });
      
    }

    // Create job-acls.xml file in job userlog dir and write the needed
    // info for authorization of users for viewing task logs of this job.
    writeJobACLs(jConf, new Path(TaskLog.getJobDir(jobid).toURI().toString()));

    //write the updated jobConf file in the job directory
    JobLocalizer.writeLocalJobFile(localJobFile, jConf);
  }

  /**
   * Download the job jar file from FS to the local file system and unjar it.
   * Set the local jar file in the passed configuration.
   *
   * @param localJobConf
   * @throws IOException
   */
  private void localizeJobJarFile(JobConf localJobConf) throws IOException {
    // copy Jar file to the local FS and unjar it.
    String jarFile = localJobConf.getJar();
    FileStatus status = null;
    long jarFileSize = -1;
    if (jarFile != null) {
      Path jarFilePath = new Path(jarFile);
      FileSystem userFs = jarFilePath.getFileSystem(localJobConf);
      try {
        status = userFs.getFileStatus(jarFilePath);
        jarFileSize = status.getLen();
      } catch (FileNotFoundException fe) {
        jarFileSize = -1;
      }
      // Here we check for five times the size of jarFileSize to accommodate for
      // unjarring the jar file in the jars directory
      Path localJarFile =
        lDirAlloc.getLocalPathForWrite(JARDST, 5 * jarFileSize, ttConf);

      //Download job.jar
      userFs.copyToLocalFile(jarFilePath, localJarFile);
      localJobConf.setJar(localJarFile.toString());
      // Also un-jar the job.jar files. We un-jar it so that classes inside
      // sub-directories, for e.g., lib/, classes/ are available on class-path
      RunJar.unJar(new File(localJarFile.toString()),
          new File(localJarFile.getParent().toString()));
      FileUtil.chmod(localJarFile.getParent().toString(), "ugo+rx", true);
    }
  }
  • Third, it creates an instance of TaskRunner to run the task.
    /**
     * Kick off the task execution
     */
    public synchronized void launchTask(RunningJob rjob) throws IOException {
      if (this.taskStatus.getRunState() == TaskStatus.State.UNASSIGNED ||
          this.taskStatus.getRunState() == TaskStatus.State.FAILED_UNCLEAN ||
          this.taskStatus.getRunState() == TaskStatus.State.KILLED_UNCLEAN) {
        localizeTask(task);
        if (this.taskStatus.getRunState() == TaskStatus.State.UNASSIGNED) {
          this.taskStatus.setRunState(TaskStatus.State.RUNNING);
        }
        setTaskRunner(task.createRunner(TaskTracker.this, this, rjob));
        this.runner.start();
        long now = System.currentTimeMillis();
        this.taskStatus.setStartTime(now);
        this.lastProgressReport = now;
      } else {
        LOG.info("Not launching task: " + task.getTaskID() + 
            " since its state is " + this.taskStatus.getRunState());
      }
    }


TaskRunner launches a new JVM to run each task, so that any bugs in the user-defined map and reduce functions don't affect the tasktracker. However, it's possible to reuse the JVM between tasks.
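JVM reuse is opted into per job rather than configured on the tasktracker. A one-line sketch with the MR1 JobConf API, which sets mapred.job.reuse.jvm.num.tasks (the default of 1 launches a fresh JVM per task; -1 means no limit):

  // Sketch: let this job reuse task JVMs.
  JobConf conf = new JobConf();
  conf.setNumTasksToExecutePerJvm(-1); // -1 = reuse a JVM for any number of tasks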

  // In TaskRunner.java
  @Override
  public final void run() {
    String errorInfo = "Child Error";
    try {
      
      //before preparing the job localize 
      //all the archives
      TaskAttemptID taskid = t.getTaskID();
      final LocalDirAllocator lDirAlloc = new LocalDirAllocator("mapred.local.dir");
      //simply get the location of the workDir and pass it to the child. The
      //child will do the actual dir creation
      final File workDir =
      new File(new Path(localdirs[rand.nextInt(localdirs.length)], 
          TaskTracker.getTaskWorkDir(t.getUser(), taskid.getJobID().toString(), 
          taskid.toString(),
          t.isTaskCleanupTask())).toString());
      
      String user = tip.getUGI().getUserName();
      
      // Set up the child task's configuration. After this call, no localization
      // of files should happen in the TaskTracker's process space. Any changes to
      // the conf object after this will NOT be reflected to the child.
      setupChildTaskConfiguration(lDirAlloc);

      if (!prepare()) {
        return;
      }
      
      // Accumulates class paths for child.
      List<String> classPaths = getClassPaths(conf, workDir,
                                              taskDistributedCacheManager);

      long logSize = TaskLog.getTaskLogLength(conf);
      
      //  Build exec child JVM args.
      Vector<String> vargs = getVMArgs(taskid, workDir, classPaths, logSize);
      
      tracker.addToMemoryManager(t.getTaskID(), t.isMapTask(), conf);

      // set memory limit using ulimit if feasible and necessary ...
      String setup = getVMSetupCmd();
      // Set up the redirection of the task's stdout and stderr streams
      File[] logFiles = prepareLogFiles(taskid, t.isTaskCleanupTask());
      File stdout = logFiles[0];
      File stderr = logFiles[1];
      tracker.getTaskTrackerInstrumentation().reportTaskLaunch(taskid, stdout,
                 stderr);
      
      Map<String, String> env = new HashMap<String, String>();
      errorInfo = getVMEnvironment(errorInfo, user, workDir, conf, env, taskid,
                                   logSize);
      
      // flatten the env as a set of export commands
      List <String> setupCmds = new ArrayList<String>();
      for(Entry<String, String> entry : env.entrySet()) {
        StringBuffer sb = new StringBuffer();
        sb.append("export ");
        sb.append(entry.getKey());
        sb.append("=\"");
        sb.append(entry.getValue());
        sb.append("\"");
        setupCmds.add(sb.toString());
      }
      setupCmds.add(setup);
      
      launchJvmAndWait(setupCmds, vargs, stdout, stderr, logSize, workDir);
      tracker.getTaskTrackerInstrumentation().reportTaskEnd(t.getTaskID());
      if (exitCodeSet) {
        if (!killed && exitCode != 0) {
          if (exitCode == 65) {
            tracker.getTaskTrackerInstrumentation().taskFailedPing(t.getTaskID());
          }
          throw new IOException("Task process exit with nonzero status of " +
              exitCode + ".");
        }
      }
    } catch (FSError e) {
      LOG.fatal("FSError", e);
      try {
        tracker.fsErrorInternal(t.getTaskID(), e.getMessage());
      } catch (IOException ie) {
        LOG.fatal(t.getTaskID()+" reporting FSError", ie);
      }
    } catch (Throwable throwable) {
      LOG.warn(t.getTaskID() + " : " + errorInfo, throwable);
      Throwable causeThrowable = new Throwable(errorInfo, throwable);
      ByteArrayOutputStream baos = new ByteArrayOutputStream();
      causeThrowable.printStackTrace(new PrintStream(baos));
      try {
        tracker.reportDiagnosticInfoInternal(t.getTaskID(), baos.toString());
      } catch (IOException e) {
        LOG.warn(t.getTaskID()+" Reporting Diagnostics", e);
      }
    } finally {
      
      // It is safe to call TaskTracker.TaskInProgress.reportTaskFinished with
      // *false* since the task has either
      // a) SUCCEEDED - which means commit has been done
      // b) FAILED - which means we do not need to commit
      tip.reportTaskFinished(false);
    }
  }


  // In LinuxTaskController
  @Override
  public void initializeJob(String user, String jobid, Path credentials,
                            Path jobConf, TaskUmbilicalProtocol taskTracker,
                            InetSocketAddress ttAddr
                            ) throws IOException {
    List<String> command = new ArrayList<String>(
      Arrays.asList(taskControllerExe, 
                    user,
                    localStorage.getDirsString(),
                    Integer.toString(Commands.INITIALIZE_JOB.getValue()),
                    jobid,
                    credentials.toUri().getPath().toString(),
                    jobConf.toUri().getPath().toString()));
    File jvm =                                  // use same jvm as parent
      new File(new File(System.getProperty("java.home"), "bin"), "java");
    command.add(jvm.toString());
    command.add("-classpath");
    command.add(System.getProperty("java.class.path"));
    command.add("-Dhadoop.log.dir=" + TaskLog.getBaseLogDir());
    command.add("-Dhadoop.root.logger=INFO,console");
    command.add("-Djava.library.path=" +
                System.getProperty("java.library.path"));
    command.add(JobLocalizer.class.getName());  // main of JobLocalizer
    command.add(user);
    command.add(jobid);
    // add the task tracker's reporting address
    command.add(ttAddr.getHostName());
    command.add(Integer.toString(ttAddr.getPort()));
    String[] commandArray = command.toArray(new String[0]);
    ShellCommandExecutor shExec = new ShellCommandExecutor(commandArray);
    if (LOG.isDebugEnabled()) {
      LOG.debug("initializeJob: " + Arrays.toString(commandArray));
    }
    try {
      shExec.execute();
      if (LOG.isDebugEnabled()) {
        logOutput(shExec.getOutput());
      }
    } catch (ExitCodeException e) {
      int exitCode = shExec.getExitCode();
      logOutput(shExec.getOutput());
      throw new IOException("Job initialization failed (" + exitCode + 
          ") with output: " + shExec.getOutput(), e);
    }
  }
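The taskControllerExe invoked above is the setuid task-controller binary, which reads its own configuration file rather than trusting its caller. An illustrative taskcontroller.cfg (the values here are assumptions, not taken from any particular cluster):

  # taskcontroller.cfg -- illustrative values only
  mapred.local.dir=/data/1/mapred/local,/data/2/mapred/local
  hadoop.log.dir=/var/log/hadoop
  mapreduce.tasktracker.group=mapred
  min.user.id=1000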


// In JvmManager.java
public void launchJvm(TaskRunner t, JvmEnv env
                        ) throws IOException, InterruptedException {
    if (t.getTask().isMapTask()) {
      mapJvmManager.reapJvm(t, env);
    } else {
      reduceJvmManager.reapJvm(t, env);
    }
  }


The child process communicates with its parent through the umbilical interface: every few seconds it informs the parent of the task's progress until the task is complete (see the run() method in MapTask or ReduceTask). As each phase completes, the child uses the umbilical interface to report a status update to its parent, the TaskTracker, which implements the TaskUmbilicalProtocol interface.


/**
 * Licensed to the Apache Software Foundation (ASF) under one
 * or more contributor license agreements.  See the NOTICE file
 * distributed with this work for additional information
 * regarding copyright ownership.  The ASF licenses this file
 * to you under the Apache License, Version 2.0 (the
 * "License"); you may not use this file except in compliance
 * with the License.  You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

package org.apache.hadoop.mapred;

import java.io.IOException;

import org.apache.hadoop.ipc.VersionedProtocol;
import org.apache.hadoop.mapred.JvmTask;
import org.apache.hadoop.mapreduce.security.token.JobTokenSelector;
import org.apache.hadoop.security.token.TokenInfo;

/** Protocol that task child process uses to contact its parent process.  The
 * parent is a daemon which polls the central master for a new map or
 * reduce task and runs it as a child process.  All communication between child
 * and parent is via this protocol. */
@TokenInfo(JobTokenSelector.class)
public interface TaskUmbilicalProtocol extends VersionedProtocol {

  /** 
   * Changed the version to 2, since we have a new method getMapOutputs 
   * Changed version to 3 to have progress() return a boolean
   * Changed the version to 4, since we have replaced 
   *         TaskUmbilicalProtocol.progress(String, float, String, 
   *         org.apache.hadoop.mapred.TaskStatus.Phase, Counters) 
   *         with statusUpdate(String, TaskStatus)
   * 
   * Version 5 changed counters representation for HADOOP-2248
   * Version 6 changes the TaskStatus representation for HADOOP-2208
   * Version 7 changes the done api (via HADOOP-3140). It now expects whether
   *           or not the task's output needs to be promoted.
   * Version 8 changes {job|tip|task}id's to use their corresponding 
   * objects rather than strings.
   * Version 9 changes the counter representation for HADOOP-1915
   * Version 10 changed the TaskStatus format and added reportNextRecordRange
   *            for HADOOP-153
   * Version 11 Adds RPCs for task commit as part of HADOOP-3150
   * Version 12 getMapCompletionEvents() now also indicates if the events are 
   *            stale or not. Hence the return type is a class that 
   *            encapsulates the events and whether to reset events index.
   * Version 13 changed the getTask method signature for HADOOP-249
   * Version 14 changed the getTask method signature for HADOOP-4232
   * Version 15 Adds FAILED_UNCLEAN and KILLED_UNCLEAN states for HADOOP-4759
   * Version 16 Added numRequiredSlots to TaskStatus for MAPREDUCE-516
   * Version 17 Change in signature of getTask() for HADOOP-5488
   * Version 18 Added fatalError for child to communicate fatal errors to TT
   * Version 19 Added jvmContext to most method signatures for MAPREDUCE-2429
   * */

  public static final long versionID = 19L;
  
  /**
   * Called when a child task process starts, to get its task.
   * @param context the JvmContext of the JVM w.r.t the TaskTracker that
   *        launched it
   * @return Task object
   * @throws IOException 
   */
  JvmTask getTask(JvmContext context) throws IOException;

  /**
   * Report child's progress to parent.
   * 
   * @param taskId task-id of the child
   * @param taskStatus status of the child
   * @param jvmContext context the jvmContext running the task.
   * @throws IOException
   * @throws InterruptedException
   * @return True if the task is known
   */
  boolean statusUpdate(TaskAttemptID taskId, TaskStatus taskStatus,
      JvmContext jvmContext) throws IOException, InterruptedException;
  
  /** Report error messages back to parent.  Calls should be sparing, since all
   *  such messages are held in the job tracker.
   *  @param taskid the id of the task involved
   *  @param trace the text to report
   *  @param jvmContext context the jvmContext running the task.
   */
  void reportDiagnosticInfo(TaskAttemptID taskid, String trace,
      JvmContext jvmContext) throws IOException;
  
  /**
   * Report the record range which is going to process next by the Task.
   * @param taskid the id of the task involved
   * @param range the range of record sequence nos
   * @param jvmContext context the jvmContext running the task.
   * @throws IOException
   */
  void reportNextRecordRange(TaskAttemptID taskid, SortedRanges.Range range,
      JvmContext jvmContext) throws IOException;

  /** Periodically called by child to check if parent is still alive.
   * @param taskid the id of the task involved
   * @param jvmContext context the jvmContext running the task.
   * @return True if the task is known
   */
  boolean ping(TaskAttemptID taskid, JvmContext jvmContext) throws IOException;

  /** Report that the task is successfully completed.  Failure is assumed if
   * the task process exits without calling this.
   * @param taskid task's id
   * @param jvmContext context the jvmContext running the task.
   */
  void done(TaskAttemptID taskid, JvmContext jvmContext) throws IOException;
  
  /** 
   * Report that the task is complete, but its commit is pending.
   * 
   * @param taskId task's id
   * @param taskStatus status of the child
   * @param jvmContext context the jvmContext running the task.
   * @throws IOException
   */
  void commitPending(TaskAttemptID taskId, TaskStatus taskStatus,
      JvmContext jvmContext) throws IOException, InterruptedException;  

  /**
   * Polling to know whether the task can go-ahead with commit 
   * @param taskid
   * @param jvmContext context the jvmContext running the task.
   * @return true/false 
   * @throws IOException
   */
  boolean canCommit(TaskAttemptID taskid, JvmContext jvmContext) throws IOException;

  /** Report that a reduce-task couldn't shuffle map-outputs. */
  void shuffleError(TaskAttemptID taskId, String message, JvmContext jvmContext)
      throws IOException;
  
  /** Report that the task encountered a local filesystem error.*/
  void fsError(TaskAttemptID taskId, String message, JvmContext jvmContext)
      throws IOException;

  /** Report that the task encountered a fatal error.*/
  void fatalError(TaskAttemptID taskId, String message, JvmContext jvmContext)
      throws IOException;
  
  /** Called by a reduce task to get the map output locations for finished maps.
   * Returns an update centered around the map-task-completion-events. 
   * The update also piggybacks the information whether the events copy at the 
   * task-tracker has changed or not. This will trigger some action at the 
   * child-process.
   *
   * @param jobId the reducer job id
   * @param fromIndex the index starting from which the locations should be 
   * fetched
   * @param maxLocs the max number of locations to fetch
   * @param id The attempt id of the task that is trying to communicate
   * @return A {@link MapTaskCompletionEventsUpdate} 
   */
  MapTaskCompletionEventsUpdate getMapCompletionEvents(JobID jobId, 
                                                       int fromIndex, 
                                                       int maxLocs,
                                                       TaskAttemptID id,
                                                       JvmContext jvmContext) 
  throws IOException;

  /**
   * The job initializer needs to report the sizes of the archive
   * objects and directories in the private distributed cache.
   * @param jobId the job to update
   * @param sizes the array of sizes that were computed
   * @throws IOException
   */
  void updatePrivateDistributedCacheSizes(org.apache.hadoop.mapreduce.JobID jobId,
                                          long[] sizes) throws IOException;
}
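On the other end of this protocol, the child JVM (Child.java in MR1) obtains an RPC proxy for the interface and drives its whole lifecycle through it. A condensed sketch, assuming address, jvmId, and pid have been recovered from the child's command-line arguments:

  // Child-side sketch: talking to the parent tasktracker over the umbilical.
  TaskUmbilicalProtocol umbilical =
      (TaskUmbilicalProtocol) RPC.getProxy(TaskUmbilicalProtocol.class,
          TaskUmbilicalProtocol.versionID, address, conf);
  JvmContext context = new JvmContext(jvmId, pid); // identifies this JVM to the TT
  JvmTask jvmTask = umbilical.getTask(context);    // ask the parent for work
  Task task = jvmTask.getTask();
  // task.run(...) then calls umbilical.statusUpdate(...) and ping(...) periodically
  umbilical.done(task.getTaskID(), context);       // report successful completion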