
First, Server.MapPath converts a virtual path into an absolute path on the server's disk. Are you perhaps using it backwards?
Also note that the virtual path you pass via Request.Form must be a server-side path; if it is a client-side path, the server naturally cannot map it.
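What Server.MapPath does can be sketched in plain Java: join a server-side virtual path onto the site's physical root. The `webRoot` value below is a hypothetical site root for illustration, not from the original question.

```java
import java.nio.file.Path;
import java.nio.file.Paths;

public class MapPathSketch {
    // Minimal sketch of the Server.MapPath idea: resolve a virtual
    // path against the site's physical root directory on disk.
    public static String mapPath(String webRoot, String virtualPath) {
        // strip the leading "/" so resolve() treats it as relative
        String relative = virtualPath.startsWith("/")
                ? virtualPath.substring(1) : virtualPath;
        Path physical = Paths.get(webRoot).resolve(relative);
        return physical.toString();
    }

    public static void main(String[] args) {
        // A client-side path such as "C:\Users\me\photo.jpg" cannot be
        // mapped this way; only virtual paths under the site root can.
        System.out.println(mapPath("/var/www/site", "/upload/photo.jpg"));
    }
}
```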
String extend = worker.getPhoto().substring(worker.getPhoto().lastIndexOf("."));
// the file's extension
String folder = "/" + Globe.UPLOAD_FOLDER + "/" + worker.getUser().getUserName() + "/" + Globe.WORKER_FOLDER + "/";
// directory that stores the uploaded file
webfile.createFolder(getBase(request) + folder);
// create the folder; getBase(request) returns the project's physical path (its location on disk)
String imgPath = folder + webtool.getNowDate(2) + extend; // build a new relative path (no drive letter)
webfile.copyFile(getBase(request) + worker.getPhoto(), getBase(request) + imgPath); // "upload" the file (only a copy here, since the actual upload already happened)
worker.setPhoto(imgPath);
// this is the path that gets inserted into the database
worker = workerDao.add(worker);
// insert a record into the database
When you query later, you can read back the path that was stored in the database.
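The extension-plus-timestamp naming used above can be exercised on its own. `webtool.getNowDate(2)` is project-specific, so a standard JDK formatter stands in for it here:

```java
import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;

public class UploadNameSketch {
    // Take the extension from the original file name, exactly as the
    // substring/lastIndexOf call in the snippet above does.
    public static String extensionOf(String fileName) {
        return fileName.substring(fileName.lastIndexOf("."));
    }

    // Build a new relative path: folder + timestamp + extension.
    // (A standard formatter replaces the project's webtool.getNowDate(2).)
    public static String newImagePath(String folder, String originalName) {
        String stamp = LocalDateTime.now()
                .format(DateTimeFormatter.ofPattern("yyyyMMddHHmmss"));
        return folder + stamp + extensionOf(originalName);
    }

    public static void main(String[] args) {
        System.out.println(extensionOf("photo.jpg"));   // .jpg
        System.out.println(newImagePath("/upload/worker/", "photo.jpg"));
    }
}
```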
To feed List<Map<String,Object>> data to JasperReports/iReport, pass the list to a JRBeanCollectionDataSource.
For example:
List<Teacher> teachers = (List<Teacher>) getTeachers();
JRBeanCollectionDataSource dataSource = new JRBeanCollectionDataSource(teachers);
This dataSource is a data source holding the Teacher objects.
Add the Students field under the Fields node, and in the properties panel set "Field Class" to java.util.List.
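JRBeanCollectionDataSource resolves each report field name against a getter on the bean. That lookup can be imitated with the JDK's own JavaBeans introspection; the Teacher bean below is a stand-in for illustration, not the original project's class:

```java
import java.beans.BeanInfo;
import java.beans.Introspector;
import java.beans.PropertyDescriptor;
import java.util.Arrays;
import java.util.List;

public class BeanFieldSketch {
    // A stand-in bean: a report field named "students" with
    // Field Class java.util.List maps onto the getStudents() getter.
    public static class Teacher {
        private final String name;
        private final List<String> students;
        public Teacher(String name, List<String> students) {
            this.name = name;
            this.students = students;
        }
        public String getName() { return name; }
        public List<String> getStudents() { return students; }
    }

    // Read a named property from a bean the way a bean data source does.
    public static Object readProperty(Object bean, String field) {
        try {
            BeanInfo info = Introspector.getBeanInfo(bean.getClass());
            for (PropertyDescriptor pd : info.getPropertyDescriptors()) {
                if (pd.getName().equals(field)) {
                    return pd.getReadMethod().invoke(bean);
                }
            }
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
        throw new IllegalArgumentException("no such field: " + field);
    }

    public static void main(String[] args) {
        Teacher t = new Teacher("Zhang", Arrays.asList("Li", "Wang"));
        System.out.println(readProperty(t, "name"));     // Zhang
        System.out.println(readProperty(t, "students")); // [Li, Wang]
    }
}
```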
An input split is processed by a single map operation. The default implementation used is FileInputFormat, which extends InputFormat (InputFormat has two important methods: getSplits and createRecordReader):
public abstract class InputFormat<K, V> {
public abstract List<InputSplit> getSplits(JobContext context ) throws IOException, InterruptedException;
public abstract RecordReader<K,V> createRecordReader(InputSplit split, TaskAttemptContext context) throws IOException, InterruptedException;
}
The client running the job computes the splits by calling getSplits(), then sends them to the ResourceManager (called the JobTracker in older versions). The ResourceManager uses the splits' storage-location information to schedule map tasks that process the split data on NodeManagers (TaskTrackers).
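The split size that FileInputFormat computes inside getSplits() follows a simple clamping rule: max(minSize, min(maxSize, blockSize)). A standalone sketch of that arithmetic, with illustrative values:

```java
public class SplitSizeSketch {
    // The split-size rule used by FileInputFormat: clamp the HDFS
    // block size between the configured minimum and maximum split sizes.
    public static long computeSplitSize(long blockSize, long minSize, long maxSize) {
        return Math.max(minSize, Math.min(maxSize, blockSize));
    }

    public static void main(String[] args) {
        long blockSize = 128L * 1024 * 1024; // a 128 MB HDFS block
        // With the default min (1) and max (Long.MAX_VALUE),
        // each split is exactly one block.
        System.out.println(computeSplitSize(blockSize, 1L, Long.MAX_VALUE));
    }
}
```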
On a NodeManager, the map task passes its input split to InputFormat's createRecordReader() method to obtain a RecordReader for that split. The map task uses the RecordReader to generate the records' key-value pairs, which are then passed to the map function of the Mapper class. This is driven by Mapper's run() method:
public class Mapper<KEYIN, VALUEIN, KEYOUT, VALUEOUT> {
  /**
   * The <code>Context</code> passed on to the {@link Mapper} implementations.
   */
  public abstract class Context
      implements MapContext<KEYIN,VALUEIN,KEYOUT,VALUEOUT> {
  }

  /**
   * Called once at the beginning of the task.
   */
  protected void setup(Context context
                       ) throws IOException, InterruptedException {
    // NOTHING
  }

  /**
   * Called once for each key/value pair in the input split. Most applications
   * should override this, but the default is the identity function.
   */
  @SuppressWarnings("unchecked")
  protected void map(KEYIN key, VALUEIN value,
                     Context context) throws IOException, InterruptedException {
    context.write((KEYOUT) key, (VALUEOUT) value);
  }

  /**
   * Called once at the end of the task.
   */
  protected void cleanup(Context context
                         ) throws IOException, InterruptedException {
    // NOTHING
  }

  /**
   * Expert users can override this method for more complete control over the
   * execution of the Mapper.
   * @param context
   * @throws IOException
   */
  public void run(Context context) throws IOException, InterruptedException {
    setup(context);
    while (context.nextKeyValue()) {
      map(context.getCurrentKey(), context.getCurrentValue(), context);
    }
    cleanup(context);
  }
}
After setup() runs, run() repeatedly calls nextKeyValue() on the Context, which delegates to the RecordReader's method of the same name to produce the key and value objects for map. Through the Context, the key and value are fetched from the RecordReader and passed to map(). When the reader reaches the end of the stream, nextKeyValue() returns false, the map task runs cleanup(), and then finishes. (The RecordReader actually invoked here is the subclass LineRecordReader.)
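The run()/nextKeyValue() handshake described above can be imitated with plain Java I/O. The reader below mimics LineRecordReader's contract (byte-offset key, line value) without any Hadoop classes, assuming '\n' line endings:

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.StringReader;

public class MiniLineReader {
    // Mimics the RecordReader contract: nextKeyValue() advances,
    // getCurrentKey()/getCurrentValue() expose the current record.
    private final BufferedReader in;
    private long key = -1;   // byte offset of the current line
    private long pos = 0;
    private String value;

    public MiniLineReader(String text) {
        this.in = new BufferedReader(new StringReader(text));
    }

    public boolean nextKeyValue() throws IOException {
        String line = in.readLine();
        if (line == null) {
            return false;         // end of stream, loop in run() stops
        }
        key = pos;
        value = line;
        pos += line.length() + 1; // +1 for the newline (assumes '\n')
        return true;
    }

    public long getCurrentKey() { return key; }
    public String getCurrentValue() { return value; }

    public static void main(String[] args) throws IOException {
        // the same loop shape as Mapper.run()
        MiniLineReader reader = new MiniLineReader("alpha\nbeta\n");
        while (reader.nextKeyValue()) {
            System.out.println(reader.getCurrentKey() + "\t" + reader.getCurrentValue());
        }
        // prints:
        // 0    alpha
        // 6    beta
    }
}
```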
public class LineRecordReader extends RecordReader<LongWritable, Text> {
  private static final Log LOG = LogFactory.getLog(LineRecordReader.class);
  private CompressionCodecFactory compressionCodecs = null;
  private long start;
  private long pos;
  private long end;
  private LineReader in;
  private int maxLineLength;
  private LongWritable key = null;
  private Text value = null;
  private Seekable filePosition;
  private CompressionCodec codec;
  private Decompressor decompressor;
  private byte[] recordDelimiterBytes = null;

  public LineRecordReader() {
  }

  public LineRecordReader(byte[] recordDelimiter) {
    this.recordDelimiterBytes = recordDelimiter;
  }

  public void initialize(InputSplit genericSplit,
                         TaskAttemptContext context) throws IOException {
    FileSplit split = (FileSplit) genericSplit;
    Configuration job = context.getConfiguration();
    this.maxLineLength = job.getInt("mapred.linerecordreader.maxlength",
                                    Integer.MAX_VALUE);
    start = split.getStart();
    end = start + split.getLength();
    final Path file = split.getPath();
    compressionCodecs = new CompressionCodecFactory(job);
    codec = compressionCodecs.getCodec(file);

    // open the file and seek to the start of the split
    FileSystem fs = file.getFileSystem(job);
    FSDataInputStream fileIn = fs.open(split.getPath());
    if (isCompressedInput()) {
      decompressor = CodecPool.getDecompressor(codec);
      if (codec instanceof SplittableCompressionCodec) {
        final SplitCompressionInputStream cIn =
            ((SplittableCompressionCodec) codec).createInputStream(
                fileIn, decompressor, start, end,
                SplittableCompressionCodec.READ_MODE.BYBLOCK);
        in = new LineReader(cIn, job, recordDelimiterBytes);
        start = cIn.getAdjustedStart();
        end = cIn.getAdjustedEnd();
        filePosition = cIn;
      } else {
        in = new LineReader(codec.createInputStream(fileIn, decompressor), job,
                            recordDelimiterBytes);
        filePosition = fileIn;
      }
    } else {
      fileIn.seek(start);
      in = new LineReader(fileIn, job, recordDelimiterBytes);
      filePosition = fileIn;
    }
    // If this is not the first split, we always throw away the first record
    // because we always (except for the last split) read one extra line in
    // the next() method.
    if (start != 0) {
      start += in.readLine(new Text(), 0, maxBytesToConsume(start));
    }
    this.pos = start;
  }

  private boolean isCompressedInput() {
    return (codec != null);
  }

  private int maxBytesToConsume(long pos) {
    return isCompressedInput()
        ? Integer.MAX_VALUE
        : (int) Math.max(Math.min(Integer.MAX_VALUE, end - pos), maxLineLength);
  }

  private long getFilePosition() throws IOException {
    long retVal;
    if (isCompressedInput() && null != filePosition) {
      retVal = filePosition.getPos();
    } else {
      retVal = pos;
    }
    return retVal;
  }

  private int skipUtfByteOrderMark() throws IOException {
    // Strip the BOM (Byte Order Mark).
    // Text only supports UTF-8, so we only need to check for the UTF-8 BOM
    // (0xEF,0xBB,0xBF) at the start of the text stream.
    int newMaxLineLength = (int) Math.min(3L + (long) maxLineLength,
                                          Integer.MAX_VALUE);
    int newSize = in.readLine(value, newMaxLineLength, maxBytesToConsume(pos));
    // Even if we read 3 extra bytes for the first line,
    // we won't alter existing behavior (no backwards-incompatibility issue),
    // because newSize is less than maxLineLength and
    // the number of bytes copied to Text is always no more than newSize.
    // If the size returned from readLine is not less than maxLineLength,
    // we will discard the current line and read the next line.
    pos += newSize;
    int textLength = value.getLength();
    byte[] textBytes = value.getBytes();
    if ((textLength >= 3) && (textBytes[0] == (byte) 0xEF) &&
        (textBytes[1] == (byte) 0xBB) && (textBytes[2] == (byte) 0xBF)) {
      // found a UTF-8 BOM, strip it
      LOG.info("Found UTF-8 BOM and skipped it");
      textLength -= 3;
      newSize -= 3;
      if (textLength > 0) {
        // It may work to use the same buffer and not do the copyBytes.
        textBytes = value.copyBytes();
        value.set(textBytes, 3, textLength);
      } else {
        value.clear();
      }
    }
    return newSize;
  }

  public boolean nextKeyValue() throws IOException {
    if (key == null) {
      key = new LongWritable();
    }
    key.set(pos);
    if (value == null) {
      value = new Text();
    }
    int newSize = 0;
    // We always read one extra line, which lies outside the upper
    // split limit, i.e. (end - 1).
    while (getFilePosition() <= end) {
      if (pos == 0) {
        newSize = skipUtfByteOrderMark();
      } else {
        newSize = in.readLine(value, maxLineLength, maxBytesToConsume(pos));
        pos += newSize;
      }
      if ((newSize == 0) || (newSize < maxLineLength)) {
        break;
      }
      // line too long, try again
      LOG.info("Skipped line of size " + newSize + " at pos " +
               (pos - newSize));
    }
    if (newSize == 0) {
      key = null;
      value = null;
      return false;
    } else {
      return true;
    }
  }

  @Override
  public LongWritable getCurrentKey() {
    return key;
  }

  @Override
  public Text getCurrentValue() {
    return value;
  }

  /**
   * Get the progress within the split.
   */
  public float getProgress() {
    if (start == end) {
      return 0.0f;
    } else {
      try {
        return Math.min(1.0f, (getFilePosition() - start)
                              / (float) (end - start));
      } catch (IOException ioe) {
        throw new RuntimeException(ioe);
      }
    }
  }

  public synchronized void close() throws IOException {
    try {
      if (in != null) {
        in.close();
      }
    } finally {
      if (decompressor != null) {
        CodecPool.returnDecompressor(decompressor);
      }
    }
  }
}
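The BOM check in skipUtfByteOrderMark() boils down to comparing the first three bytes against 0xEF, 0xBB, 0xBF. A standalone sketch of that check, without the Hadoop Text buffer:

```java
import java.nio.charset.StandardCharsets;
import java.util.Arrays;

public class BomStripSketch {
    // Strip a UTF-8 BOM (0xEF,0xBB,0xBF) from the front of a byte buffer,
    // the same test LineRecordReader.skipUtfByteOrderMark() performs.
    public static byte[] stripUtf8Bom(byte[] bytes) {
        if (bytes.length >= 3 && bytes[0] == (byte) 0xEF
                && bytes[1] == (byte) 0xBB && bytes[2] == (byte) 0xBF) {
            return Arrays.copyOfRange(bytes, 3, bytes.length);
        }
        return bytes;
    }

    public static void main(String[] args) {
        byte[] withBom = {(byte) 0xEF, (byte) 0xBB, (byte) 0xBF, 'h', 'i'};
        System.out.println(
            new String(stripUtf8Bom(withBom), StandardCharsets.UTF_8)); // hi
    }
}
```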
The output of a map task is not written to HDFS; it is written to the local disk of the node that runs the map. Why? Because map output is an intermediate result: it only becomes a final result after passing through reduce, and once the whole MapReduce job finishes, this intermediate output is deleted. Storing and replicating it in HDFS would therefore be overkill. While a MapReduce job is running, you can check disk utilization in a shell with df -lh; try to keep that utilization under 90%.
The message "NOmapDB1" means the path is wrong.
Open the map_path.conf file in the map folder with a text editor and change the path inside to the correct navigation path (the path that locates the RtNavi.exe file). For example, on a 路畅 head unit, the map can be loaded from either a TF card or a USB drive: if the RtNavi map folder is in the root of the TF card, change the path to \SDMMC\RtNavi or \storagecard\RtNavi; if the RtNavi folder is in the root of the USB drive, change it to \storageusb\RtNavi.
That covers why writing Server.MapPath(Request.Form("path")) doesn't work, including: why Server.MapPath(Request.Form("path")) fails, how to locate and open a file on the server from a path stored in the database, and how JasperReports/iReport consumes List<Map<String,Object>> data.